Dec 07 09:00:41 localhost kernel: Linux version 5.14.0-645.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-68.el9) #1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025
Dec 07 09:00:41 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Dec 07 09:00:41 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 07 09:00:41 localhost kernel: BIOS-provided physical RAM map:
Dec 07 09:00:41 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 07 09:00:41 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 07 09:00:41 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 07 09:00:41 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Dec 07 09:00:41 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Dec 07 09:00:41 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 07 09:00:41 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 07 09:00:41 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Dec 07 09:00:41 localhost kernel: NX (Execute Disable) protection: active
Dec 07 09:00:41 localhost kernel: APIC: Static calls initialized
Dec 07 09:00:41 localhost kernel: SMBIOS 2.8 present.
Dec 07 09:00:41 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Dec 07 09:00:41 localhost kernel: Hypervisor detected: KVM
Dec 07 09:00:41 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 07 09:00:41 localhost kernel: kvm-clock: using sched offset of 3276239650 cycles
Dec 07 09:00:41 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 07 09:00:41 localhost kernel: tsc: Detected 2799.998 MHz processor
Dec 07 09:00:41 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 07 09:00:41 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 07 09:00:41 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Dec 07 09:00:41 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 07 09:00:41 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Dec 07 09:00:41 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Dec 07 09:00:41 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Dec 07 09:00:41 localhost kernel: Using GB pages for direct mapping
Dec 07 09:00:41 localhost kernel: RAMDISK: [mem 0x2d472000-0x32a30fff]
Dec 07 09:00:41 localhost kernel: ACPI: Early table checksum verification disabled
Dec 07 09:00:41 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec 07 09:00:41 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 07 09:00:41 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 07 09:00:41 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 07 09:00:41 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Dec 07 09:00:41 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 07 09:00:41 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 07 09:00:41 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Dec 07 09:00:41 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Dec 07 09:00:41 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Dec 07 09:00:41 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Dec 07 09:00:41 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Dec 07 09:00:41 localhost kernel: No NUMA configuration found
Dec 07 09:00:41 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Dec 07 09:00:41 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Dec 07 09:00:41 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Dec 07 09:00:41 localhost kernel: Zone ranges:
Dec 07 09:00:41 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Dec 07 09:00:41 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Dec 07 09:00:41 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Dec 07 09:00:41 localhost kernel:   Device   empty
Dec 07 09:00:41 localhost kernel: Movable zone start for each node
Dec 07 09:00:41 localhost kernel: Early memory node ranges
Dec 07 09:00:41 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Dec 07 09:00:41 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Dec 07 09:00:41 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Dec 07 09:00:41 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Dec 07 09:00:41 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 07 09:00:41 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 07 09:00:41 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Dec 07 09:00:41 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Dec 07 09:00:41 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 07 09:00:41 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 07 09:00:41 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 07 09:00:41 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 07 09:00:41 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 07 09:00:41 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 07 09:00:41 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 07 09:00:41 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 07 09:00:41 localhost kernel: TSC deadline timer available
Dec 07 09:00:41 localhost kernel: CPU topo: Max. logical packages:   8
Dec 07 09:00:41 localhost kernel: CPU topo: Max. logical dies:       8
Dec 07 09:00:41 localhost kernel: CPU topo: Max. dies per package:   1
Dec 07 09:00:41 localhost kernel: CPU topo: Max. threads per core:   1
Dec 07 09:00:41 localhost kernel: CPU topo: Num. cores per package:     1
Dec 07 09:00:41 localhost kernel: CPU topo: Num. threads per package:   1
Dec 07 09:00:41 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Dec 07 09:00:41 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 07 09:00:41 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Dec 07 09:00:41 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Dec 07 09:00:41 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Dec 07 09:00:41 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Dec 07 09:00:41 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Dec 07 09:00:41 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Dec 07 09:00:41 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Dec 07 09:00:41 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Dec 07 09:00:41 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Dec 07 09:00:41 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Dec 07 09:00:41 localhost kernel: Booting paravirtualized kernel on KVM
Dec 07 09:00:41 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 07 09:00:41 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Dec 07 09:00:41 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Dec 07 09:00:41 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Dec 07 09:00:41 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Dec 07 09:00:41 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 07 09:00:41 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 07 09:00:41 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64", will be passed to user space.
Dec 07 09:00:41 localhost kernel: random: crng init done
Dec 07 09:00:41 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 07 09:00:41 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 07 09:00:41 localhost kernel: Fallback order for Node 0: 0 
Dec 07 09:00:41 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Dec 07 09:00:41 localhost kernel: Policy zone: Normal
Dec 07 09:00:41 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 07 09:00:41 localhost kernel: software IO TLB: area num 8.
Dec 07 09:00:41 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Dec 07 09:00:41 localhost kernel: ftrace: allocating 49335 entries in 193 pages
Dec 07 09:00:41 localhost kernel: ftrace: allocated 193 pages with 3 groups
Dec 07 09:00:41 localhost kernel: Dynamic Preempt: voluntary
Dec 07 09:00:41 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 07 09:00:41 localhost kernel: rcu:         RCU event tracing is enabled.
Dec 07 09:00:41 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Dec 07 09:00:41 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Dec 07 09:00:41 localhost kernel:         Rude variant of Tasks RCU enabled.
Dec 07 09:00:41 localhost kernel:         Tracing variant of Tasks RCU enabled.
Dec 07 09:00:41 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 07 09:00:41 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Dec 07 09:00:41 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 07 09:00:41 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 07 09:00:41 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 07 09:00:41 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Dec 07 09:00:41 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 07 09:00:41 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Dec 07 09:00:41 localhost kernel: Console: colour VGA+ 80x25
Dec 07 09:00:41 localhost kernel: printk: console [ttyS0] enabled
Dec 07 09:00:41 localhost kernel: ACPI: Core revision 20230331
Dec 07 09:00:41 localhost kernel: APIC: Switch to symmetric I/O mode setup
Dec 07 09:00:41 localhost kernel: x2apic enabled
Dec 07 09:00:41 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Dec 07 09:00:41 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 07 09:00:41 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Dec 07 09:00:41 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 07 09:00:41 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 07 09:00:41 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 07 09:00:41 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 07 09:00:41 localhost kernel: Spectre V2 : Mitigation: Retpolines
Dec 07 09:00:41 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 07 09:00:41 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 07 09:00:41 localhost kernel: RETBleed: Mitigation: untrained return thunk
Dec 07 09:00:41 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 07 09:00:41 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 07 09:00:41 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 07 09:00:41 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 07 09:00:41 localhost kernel: x86/bugs: return thunk changed
Dec 07 09:00:41 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 07 09:00:41 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 07 09:00:41 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 07 09:00:41 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 07 09:00:41 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Dec 07 09:00:41 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 07 09:00:41 localhost kernel: Freeing SMP alternatives memory: 40K
Dec 07 09:00:41 localhost kernel: pid_max: default: 32768 minimum: 301
Dec 07 09:00:41 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Dec 07 09:00:41 localhost kernel: landlock: Up and running.
Dec 07 09:00:41 localhost kernel: Yama: becoming mindful.
Dec 07 09:00:41 localhost kernel: SELinux:  Initializing.
Dec 07 09:00:41 localhost kernel: LSM support for eBPF active
Dec 07 09:00:41 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 07 09:00:41 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 07 09:00:41 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 07 09:00:41 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 07 09:00:41 localhost kernel: ... version:                0
Dec 07 09:00:41 localhost kernel: ... bit width:              48
Dec 07 09:00:41 localhost kernel: ... generic registers:      6
Dec 07 09:00:41 localhost kernel: ... value mask:             0000ffffffffffff
Dec 07 09:00:41 localhost kernel: ... max period:             00007fffffffffff
Dec 07 09:00:41 localhost kernel: ... fixed-purpose events:   0
Dec 07 09:00:41 localhost kernel: ... event mask:             000000000000003f
Dec 07 09:00:41 localhost kernel: signal: max sigframe size: 1776
Dec 07 09:00:41 localhost kernel: rcu: Hierarchical SRCU implementation.
Dec 07 09:00:41 localhost kernel: rcu:         Max phase no-delay instances is 400.
Dec 07 09:00:41 localhost kernel: smp: Bringing up secondary CPUs ...
Dec 07 09:00:41 localhost kernel: smpboot: x86: Booting SMP configuration:
Dec 07 09:00:41 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Dec 07 09:00:41 localhost kernel: smp: Brought up 1 node, 8 CPUs
Dec 07 09:00:41 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Dec 07 09:00:41 localhost kernel: node 0 deferred pages initialised in 12ms
Dec 07 09:00:41 localhost kernel: Memory: 7763740K/8388068K available (16384K kernel code, 5795K rwdata, 13908K rodata, 4196K init, 7156K bss, 618208K reserved, 0K cma-reserved)
Dec 07 09:00:41 localhost kernel: devtmpfs: initialized
Dec 07 09:00:41 localhost kernel: x86/mm: Memory block size: 128MB
Dec 07 09:00:41 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 07 09:00:41 localhost kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Dec 07 09:00:41 localhost kernel: pinctrl core: initialized pinctrl subsystem
Dec 07 09:00:41 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 07 09:00:41 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Dec 07 09:00:41 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 07 09:00:41 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 07 09:00:41 localhost kernel: audit: initializing netlink subsys (disabled)
Dec 07 09:00:41 localhost kernel: audit: type=2000 audit(1765098039.267:1): state=initialized audit_enabled=0 res=1
Dec 07 09:00:41 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Dec 07 09:00:41 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 07 09:00:41 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 07 09:00:41 localhost kernel: cpuidle: using governor menu
Dec 07 09:00:41 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 07 09:00:41 localhost kernel: PCI: Using configuration type 1 for base access
Dec 07 09:00:41 localhost kernel: PCI: Using configuration type 1 for extended access
Dec 07 09:00:41 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 07 09:00:41 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 07 09:00:41 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 07 09:00:41 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 07 09:00:41 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 07 09:00:41 localhost kernel: Demotion targets for Node 0: null
Dec 07 09:00:41 localhost kernel: cryptd: max_cpu_qlen set to 1000
Dec 07 09:00:41 localhost kernel: ACPI: Added _OSI(Module Device)
Dec 07 09:00:41 localhost kernel: ACPI: Added _OSI(Processor Device)
Dec 07 09:00:41 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 07 09:00:41 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 07 09:00:41 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 07 09:00:41 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 07 09:00:41 localhost kernel: ACPI: Interpreter enabled
Dec 07 09:00:41 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Dec 07 09:00:41 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Dec 07 09:00:41 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 07 09:00:41 localhost kernel: PCI: Using E820 reservations for host bridge windows
Dec 07 09:00:41 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec 07 09:00:41 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 07 09:00:41 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Dec 07 09:00:41 localhost kernel: acpiphp: Slot [3] registered
Dec 07 09:00:41 localhost kernel: acpiphp: Slot [4] registered
Dec 07 09:00:41 localhost kernel: acpiphp: Slot [5] registered
Dec 07 09:00:41 localhost kernel: acpiphp: Slot [6] registered
Dec 07 09:00:41 localhost kernel: acpiphp: Slot [7] registered
Dec 07 09:00:41 localhost kernel: acpiphp: Slot [8] registered
Dec 07 09:00:41 localhost kernel: acpiphp: Slot [9] registered
Dec 07 09:00:41 localhost kernel: acpiphp: Slot [10] registered
Dec 07 09:00:41 localhost kernel: acpiphp: Slot [11] registered
Dec 07 09:00:41 localhost kernel: acpiphp: Slot [12] registered
Dec 07 09:00:41 localhost kernel: acpiphp: Slot [13] registered
Dec 07 09:00:41 localhost kernel: acpiphp: Slot [14] registered
Dec 07 09:00:41 localhost kernel: acpiphp: Slot [15] registered
Dec 07 09:00:41 localhost kernel: acpiphp: Slot [16] registered
Dec 07 09:00:41 localhost kernel: acpiphp: Slot [17] registered
Dec 07 09:00:41 localhost kernel: acpiphp: Slot [18] registered
Dec 07 09:00:41 localhost kernel: acpiphp: Slot [19] registered
Dec 07 09:00:41 localhost kernel: acpiphp: Slot [20] registered
Dec 07 09:00:41 localhost kernel: acpiphp: Slot [21] registered
Dec 07 09:00:41 localhost kernel: acpiphp: Slot [22] registered
Dec 07 09:00:41 localhost kernel: acpiphp: Slot [23] registered
Dec 07 09:00:41 localhost kernel: acpiphp: Slot [24] registered
Dec 07 09:00:41 localhost kernel: acpiphp: Slot [25] registered
Dec 07 09:00:41 localhost kernel: acpiphp: Slot [26] registered
Dec 07 09:00:41 localhost kernel: acpiphp: Slot [27] registered
Dec 07 09:00:41 localhost kernel: acpiphp: Slot [28] registered
Dec 07 09:00:41 localhost kernel: acpiphp: Slot [29] registered
Dec 07 09:00:41 localhost kernel: acpiphp: Slot [30] registered
Dec 07 09:00:41 localhost kernel: acpiphp: Slot [31] registered
Dec 07 09:00:41 localhost kernel: PCI host bridge to bus 0000:00
Dec 07 09:00:41 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Dec 07 09:00:41 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Dec 07 09:00:41 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 07 09:00:41 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 07 09:00:41 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Dec 07 09:00:41 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 07 09:00:41 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Dec 07 09:00:41 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Dec 07 09:00:41 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Dec 07 09:00:41 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Dec 07 09:00:41 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Dec 07 09:00:41 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Dec 07 09:00:41 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Dec 07 09:00:41 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Dec 07 09:00:41 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Dec 07 09:00:41 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Dec 07 09:00:41 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Dec 07 09:00:41 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Dec 07 09:00:41 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Dec 07 09:00:41 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Dec 07 09:00:41 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Dec 07 09:00:41 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Dec 07 09:00:41 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Dec 07 09:00:41 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Dec 07 09:00:41 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 07 09:00:41 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 07 09:00:41 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Dec 07 09:00:41 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Dec 07 09:00:41 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Dec 07 09:00:41 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Dec 07 09:00:41 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec 07 09:00:41 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Dec 07 09:00:41 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Dec 07 09:00:41 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec 07 09:00:41 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Dec 07 09:00:41 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Dec 07 09:00:41 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec 07 09:00:41 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec 07 09:00:41 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Dec 07 09:00:41 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Dec 07 09:00:41 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 07 09:00:41 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 07 09:00:41 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 07 09:00:41 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 07 09:00:41 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 07 09:00:41 localhost kernel: iommu: Default domain type: Translated
Dec 07 09:00:41 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 07 09:00:41 localhost kernel: SCSI subsystem initialized
Dec 07 09:00:41 localhost kernel: ACPI: bus type USB registered
Dec 07 09:00:41 localhost kernel: usbcore: registered new interface driver usbfs
Dec 07 09:00:41 localhost kernel: usbcore: registered new interface driver hub
Dec 07 09:00:41 localhost kernel: usbcore: registered new device driver usb
Dec 07 09:00:41 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 07 09:00:41 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec 07 09:00:41 localhost kernel: PTP clock support registered
Dec 07 09:00:41 localhost kernel: EDAC MC: Ver: 3.0.0
Dec 07 09:00:41 localhost kernel: NetLabel: Initializing
Dec 07 09:00:41 localhost kernel: NetLabel:  domain hash size = 128
Dec 07 09:00:41 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Dec 07 09:00:41 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Dec 07 09:00:41 localhost kernel: PCI: Using ACPI for IRQ routing
Dec 07 09:00:41 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 07 09:00:41 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 07 09:00:41 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Dec 07 09:00:41 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec 07 09:00:41 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec 07 09:00:41 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 07 09:00:41 localhost kernel: vgaarb: loaded
Dec 07 09:00:41 localhost kernel: clocksource: Switched to clocksource kvm-clock
Dec 07 09:00:41 localhost kernel: VFS: Disk quotas dquot_6.6.0
Dec 07 09:00:41 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 07 09:00:41 localhost kernel: pnp: PnP ACPI init
Dec 07 09:00:41 localhost kernel: pnp 00:03: [dma 2]
Dec 07 09:00:41 localhost kernel: pnp: PnP ACPI: found 5 devices
Dec 07 09:00:41 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 07 09:00:41 localhost kernel: NET: Registered PF_INET protocol family
Dec 07 09:00:41 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 07 09:00:41 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec 07 09:00:41 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 07 09:00:41 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 07 09:00:41 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec 07 09:00:41 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec 07 09:00:41 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Dec 07 09:00:41 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 07 09:00:41 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 07 09:00:41 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 07 09:00:41 localhost kernel: NET: Registered PF_XDP protocol family
Dec 07 09:00:41 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Dec 07 09:00:41 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Dec 07 09:00:41 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 07 09:00:41 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Dec 07 09:00:41 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Dec 07 09:00:41 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec 07 09:00:41 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 07 09:00:41 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 07 09:00:41 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 74629 usecs
Dec 07 09:00:41 localhost kernel: PCI: CLS 0 bytes, default 64
Dec 07 09:00:41 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 07 09:00:41 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Dec 07 09:00:41 localhost kernel: ACPI: bus type thunderbolt registered
Dec 07 09:00:41 localhost kernel: Trying to unpack rootfs image as initramfs...
Dec 07 09:00:41 localhost kernel: Initialise system trusted keyrings
Dec 07 09:00:41 localhost kernel: Key type blacklist registered
Dec 07 09:00:41 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Dec 07 09:00:41 localhost kernel: zbud: loaded
Dec 07 09:00:41 localhost kernel: integrity: Platform Keyring initialized
Dec 07 09:00:41 localhost kernel: integrity: Machine keyring initialized
Dec 07 09:00:41 localhost kernel: Freeing initrd memory: 87804K
Dec 07 09:00:41 localhost kernel: NET: Registered PF_ALG protocol family
Dec 07 09:00:41 localhost kernel: xor: automatically using best checksumming function   avx       
Dec 07 09:00:41 localhost kernel: Key type asymmetric registered
Dec 07 09:00:41 localhost kernel: Asymmetric key parser 'x509' registered
Dec 07 09:00:41 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Dec 07 09:00:41 localhost kernel: io scheduler mq-deadline registered
Dec 07 09:00:41 localhost kernel: io scheduler kyber registered
Dec 07 09:00:41 localhost kernel: io scheduler bfq registered
Dec 07 09:00:41 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Dec 07 09:00:41 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Dec 07 09:00:41 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Dec 07 09:00:41 localhost kernel: ACPI: button: Power Button [PWRF]
Dec 07 09:00:41 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec 07 09:00:41 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 07 09:00:41 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 07 09:00:41 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 07 09:00:41 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 07 09:00:41 localhost kernel: Non-volatile memory driver v1.3
Dec 07 09:00:41 localhost kernel: rdac: device handler registered
Dec 07 09:00:41 localhost kernel: hp_sw: device handler registered
Dec 07 09:00:41 localhost kernel: emc: device handler registered
Dec 07 09:00:41 localhost kernel: alua: device handler registered
Dec 07 09:00:41 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Dec 07 09:00:41 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Dec 07 09:00:41 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Dec 07 09:00:41 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Dec 07 09:00:41 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Dec 07 09:00:41 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Dec 07 09:00:41 localhost kernel: usb usb1: Product: UHCI Host Controller
Dec 07 09:00:41 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-645.el9.x86_64 uhci_hcd
Dec 07 09:00:41 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Dec 07 09:00:41 localhost kernel: hub 1-0:1.0: USB hub found
Dec 07 09:00:41 localhost kernel: hub 1-0:1.0: 2 ports detected
Dec 07 09:00:41 localhost kernel: usbcore: registered new interface driver usbserial_generic
Dec 07 09:00:41 localhost kernel: usbserial: USB Serial support registered for generic
Dec 07 09:00:41 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 07 09:00:41 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 07 09:00:41 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 07 09:00:41 localhost kernel: mousedev: PS/2 mouse device common for all mice
Dec 07 09:00:41 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Dec 07 09:00:41 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 07 09:00:41 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Dec 07 09:00:41 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Dec 07 09:00:41 localhost kernel: rtc_cmos 00:04: registered as rtc0
Dec 07 09:00:41 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-12-07T09:00:40 UTC (1765098040)
Dec 07 09:00:41 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec 07 09:00:41 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 07 09:00:41 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 07 09:00:41 localhost kernel: usbcore: registered new interface driver usbhid
Dec 07 09:00:41 localhost kernel: usbhid: USB HID core driver
Dec 07 09:00:41 localhost kernel: drop_monitor: Initializing network drop monitor service
Dec 07 09:00:41 localhost kernel: Initializing XFRM netlink socket
Dec 07 09:00:41 localhost kernel: NET: Registered PF_INET6 protocol family
Dec 07 09:00:41 localhost kernel: Segment Routing with IPv6
Dec 07 09:00:41 localhost kernel: NET: Registered PF_PACKET protocol family
Dec 07 09:00:41 localhost kernel: mpls_gso: MPLS GSO support
Dec 07 09:00:41 localhost kernel: IPI shorthand broadcast: enabled
Dec 07 09:00:41 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Dec 07 09:00:41 localhost kernel: AES CTR mode by8 optimization enabled
Dec 07 09:00:41 localhost kernel: sched_clock: Marking stable (1185006333, 154256495)->(1424752789, -85489961)
Dec 07 09:00:41 localhost kernel: registered taskstats version 1
Dec 07 09:00:41 localhost kernel: Loading compiled-in X.509 certificates
Dec 07 09:00:41 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4c28336b4850d771d036b52fb2778fdb4f02f708'
Dec 07 09:00:41 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Dec 07 09:00:41 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Dec 07 09:00:41 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Dec 07 09:00:41 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Dec 07 09:00:41 localhost kernel: Demotion targets for Node 0: null
Dec 07 09:00:41 localhost kernel: page_owner is disabled
Dec 07 09:00:41 localhost kernel: Key type .fscrypt registered
Dec 07 09:00:41 localhost kernel: Key type fscrypt-provisioning registered
Dec 07 09:00:41 localhost kernel: Key type big_key registered
Dec 07 09:00:41 localhost kernel: Key type encrypted registered
Dec 07 09:00:41 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 07 09:00:41 localhost kernel: Loading compiled-in module X.509 certificates
Dec 07 09:00:41 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4c28336b4850d771d036b52fb2778fdb4f02f708'
Dec 07 09:00:41 localhost kernel: ima: Allocated hash algorithm: sha256
Dec 07 09:00:41 localhost kernel: ima: No architecture policies found
Dec 07 09:00:41 localhost kernel: evm: Initialising EVM extended attributes:
Dec 07 09:00:41 localhost kernel: evm: security.selinux
Dec 07 09:00:41 localhost kernel: evm: security.SMACK64 (disabled)
Dec 07 09:00:41 localhost kernel: evm: security.SMACK64EXEC (disabled)
Dec 07 09:00:41 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Dec 07 09:00:41 localhost kernel: evm: security.SMACK64MMAP (disabled)
Dec 07 09:00:41 localhost kernel: evm: security.apparmor (disabled)
Dec 07 09:00:41 localhost kernel: evm: security.ima
Dec 07 09:00:41 localhost kernel: evm: security.capability
Dec 07 09:00:41 localhost kernel: evm: HMAC attrs: 0x1
Dec 07 09:00:41 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Dec 07 09:00:41 localhost kernel: Running certificate verification RSA selftest
Dec 07 09:00:41 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Dec 07 09:00:41 localhost kernel: Running certificate verification ECDSA selftest
Dec 07 09:00:41 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Dec 07 09:00:41 localhost kernel: clk: Disabling unused clocks
Dec 07 09:00:41 localhost kernel: Freeing unused decrypted memory: 2028K
Dec 07 09:00:41 localhost kernel: Freeing unused kernel image (initmem) memory: 4196K
Dec 07 09:00:41 localhost kernel: Write protecting the kernel read-only data: 30720k
Dec 07 09:00:41 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 428K
Dec 07 09:00:41 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Dec 07 09:00:41 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Dec 07 09:00:41 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Dec 07 09:00:41 localhost kernel: usb 1-1: Manufacturer: QEMU
Dec 07 09:00:41 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Dec 07 09:00:41 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Dec 07 09:00:41 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Dec 07 09:00:41 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Dec 07 09:00:41 localhost kernel: Run /init as init process
Dec 07 09:00:41 localhost kernel:   with arguments:
Dec 07 09:00:41 localhost kernel:     /init
Dec 07 09:00:41 localhost kernel:   with environment:
Dec 07 09:00:41 localhost kernel:     HOME=/
Dec 07 09:00:41 localhost kernel:     TERM=linux
Dec 07 09:00:41 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64
Dec 07 09:00:41 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 07 09:00:41 localhost systemd[1]: Detected virtualization kvm.
Dec 07 09:00:41 localhost systemd[1]: Detected architecture x86-64.
Dec 07 09:00:41 localhost systemd[1]: Running in initrd.
Dec 07 09:00:41 localhost systemd[1]: No hostname configured, using default hostname.
Dec 07 09:00:41 localhost systemd[1]: Hostname set to <localhost>.
Dec 07 09:00:41 localhost systemd[1]: Initializing machine ID from VM UUID.
Dec 07 09:00:41 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Dec 07 09:00:41 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Dec 07 09:00:41 localhost systemd[1]: Reached target Local Encrypted Volumes.
Dec 07 09:00:41 localhost systemd[1]: Reached target Initrd /usr File System.
Dec 07 09:00:41 localhost systemd[1]: Reached target Local File Systems.
Dec 07 09:00:41 localhost systemd[1]: Reached target Path Units.
Dec 07 09:00:41 localhost systemd[1]: Reached target Slice Units.
Dec 07 09:00:41 localhost systemd[1]: Reached target Swaps.
Dec 07 09:00:41 localhost systemd[1]: Reached target Timer Units.
Dec 07 09:00:41 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec 07 09:00:41 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Dec 07 09:00:41 localhost systemd[1]: Listening on Journal Socket.
Dec 07 09:00:41 localhost systemd[1]: Listening on udev Control Socket.
Dec 07 09:00:41 localhost systemd[1]: Listening on udev Kernel Socket.
Dec 07 09:00:41 localhost systemd[1]: Reached target Socket Units.
Dec 07 09:00:41 localhost systemd[1]: Starting Create List of Static Device Nodes...
Dec 07 09:00:41 localhost systemd[1]: Starting Journal Service...
Dec 07 09:00:41 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec 07 09:00:41 localhost systemd[1]: Starting Apply Kernel Variables...
Dec 07 09:00:41 localhost systemd[1]: Starting Create System Users...
Dec 07 09:00:41 localhost systemd[1]: Starting Setup Virtual Console...
Dec 07 09:00:41 localhost systemd[1]: Finished Create List of Static Device Nodes.
Dec 07 09:00:41 localhost systemd[1]: Finished Apply Kernel Variables.
Dec 07 09:00:41 localhost systemd-journald[305]: Journal started
Dec 07 09:00:41 localhost systemd-journald[305]: Runtime Journal (/run/log/journal/33deaee172d148f6b57d96104f1f436a) is 8.0M, max 153.6M, 145.6M free.
Dec 07 09:00:41 localhost systemd-sysusers[310]: Creating group 'users' with GID 100.
Dec 07 09:00:41 localhost systemd-sysusers[310]: Creating group 'dbus' with GID 81.
Dec 07 09:00:41 localhost systemd-sysusers[310]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Dec 07 09:00:41 localhost systemd[1]: Started Journal Service.
Dec 07 09:00:41 localhost systemd[1]: Finished Create System Users.
Dec 07 09:00:41 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Dec 07 09:00:41 localhost systemd[1]: Starting Create Volatile Files and Directories...
Dec 07 09:00:41 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Dec 07 09:00:41 localhost systemd[1]: Finished Create Volatile Files and Directories.
Dec 07 09:00:41 localhost systemd[1]: Finished Setup Virtual Console.
Dec 07 09:00:41 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Dec 07 09:00:41 localhost systemd[1]: Starting dracut cmdline hook...
Dec 07 09:00:41 localhost dracut-cmdline[327]: dracut-9 dracut-057-102.git20250818.el9
Dec 07 09:00:41 localhost dracut-cmdline[327]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 07 09:00:41 localhost systemd[1]: Finished dracut cmdline hook.
Dec 07 09:00:41 localhost systemd[1]: Starting dracut pre-udev hook...
Dec 07 09:00:41 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 07 09:00:41 localhost kernel: device-mapper: uevent: version 1.0.3
Dec 07 09:00:41 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Dec 07 09:00:41 localhost kernel: RPC: Registered named UNIX socket transport module.
Dec 07 09:00:41 localhost kernel: RPC: Registered udp transport module.
Dec 07 09:00:41 localhost kernel: RPC: Registered tcp transport module.
Dec 07 09:00:41 localhost kernel: RPC: Registered tcp-with-tls transport module.
Dec 07 09:00:41 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec 07 09:00:41 localhost rpc.statd[444]: Version 2.5.4 starting
Dec 07 09:00:41 localhost rpc.statd[444]: Initializing NSM state
Dec 07 09:00:41 localhost rpc.idmapd[449]: Setting log level to 0
Dec 07 09:00:41 localhost systemd[1]: Finished dracut pre-udev hook.
Dec 07 09:00:41 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 07 09:00:41 localhost systemd-udevd[462]: Using default interface naming scheme 'rhel-9.0'.
Dec 07 09:00:42 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 07 09:00:42 localhost systemd[1]: Starting dracut pre-trigger hook...
Dec 07 09:00:42 localhost systemd[1]: Finished dracut pre-trigger hook.
Dec 07 09:00:42 localhost systemd[1]: Starting Coldplug All udev Devices...
Dec 07 09:00:42 localhost systemd[1]: Created slice Slice /system/modprobe.
Dec 07 09:00:42 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 07 09:00:42 localhost systemd[1]: Finished Coldplug All udev Devices.
Dec 07 09:00:42 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 07 09:00:42 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 07 09:00:42 localhost systemd[1]: Mounting Kernel Configuration File System...
Dec 07 09:00:42 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 07 09:00:42 localhost systemd[1]: Reached target Network.
Dec 07 09:00:42 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 07 09:00:42 localhost systemd[1]: Starting dracut initqueue hook...
Dec 07 09:00:42 localhost systemd[1]: Mounted Kernel Configuration File System.
Dec 07 09:00:42 localhost systemd[1]: Reached target System Initialization.
Dec 07 09:00:42 localhost systemd[1]: Reached target Basic System.
Dec 07 09:00:42 localhost kernel: libata version 3.00 loaded.
Dec 07 09:00:42 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Dec 07 09:00:42 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Dec 07 09:00:42 localhost kernel: scsi host0: ata_piix
Dec 07 09:00:42 localhost kernel: scsi host1: ata_piix
Dec 07 09:00:42 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Dec 07 09:00:42 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Dec 07 09:00:42 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Dec 07 09:00:42 localhost kernel:  vda: vda1
Dec 07 09:00:42 localhost kernel: ata1: found unknown device (class 0)
Dec 07 09:00:42 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 07 09:00:42 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Dec 07 09:00:42 localhost systemd-udevd[489]: Network interface NamePolicy= disabled on kernel command line.
Dec 07 09:00:42 localhost systemd[1]: Found device /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f.
Dec 07 09:00:42 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Dec 07 09:00:42 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 07 09:00:42 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 07 09:00:42 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Dec 07 09:00:42 localhost systemd[1]: Reached target Initrd Root Device.
Dec 07 09:00:42 localhost systemd[1]: Finished dracut initqueue hook.
Dec 07 09:00:42 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Dec 07 09:00:42 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Dec 07 09:00:42 localhost systemd[1]: Reached target Remote File Systems.
Dec 07 09:00:42 localhost systemd[1]: Starting dracut pre-mount hook...
Dec 07 09:00:42 localhost systemd[1]: Finished dracut pre-mount hook.
Dec 07 09:00:42 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f...
Dec 07 09:00:42 localhost systemd-fsck[551]: /usr/sbin/fsck.xfs: XFS file system.
Dec 07 09:00:42 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f.
Dec 07 09:00:42 localhost systemd[1]: Mounting /sysroot...
Dec 07 09:00:43 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Dec 07 09:00:43 localhost kernel: XFS (vda1): Mounting V5 Filesystem fcf6b761-831a-48a7-9f5f-068b5063763f
Dec 07 09:00:43 localhost kernel: XFS (vda1): Ending clean mount
Dec 07 09:00:43 localhost systemd[1]: Mounted /sysroot.
Dec 07 09:00:43 localhost systemd[1]: Reached target Initrd Root File System.
Dec 07 09:00:43 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Dec 07 09:00:43 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 07 09:00:43 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Dec 07 09:00:43 localhost systemd[1]: Reached target Initrd File Systems.
Dec 07 09:00:43 localhost systemd[1]: Reached target Initrd Default Target.
Dec 07 09:00:43 localhost systemd[1]: Starting dracut mount hook...
Dec 07 09:00:43 localhost systemd[1]: Finished dracut mount hook.
Dec 07 09:00:43 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Dec 07 09:00:43 localhost rpc.idmapd[449]: exiting on signal 15
Dec 07 09:00:43 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Dec 07 09:00:43 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Dec 07 09:00:43 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Dec 07 09:00:43 localhost systemd[1]: Stopped target Network.
Dec 07 09:00:43 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Dec 07 09:00:43 localhost systemd[1]: Stopped target Timer Units.
Dec 07 09:00:43 localhost systemd[1]: dbus.socket: Deactivated successfully.
Dec 07 09:00:43 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Dec 07 09:00:43 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 07 09:00:43 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Dec 07 09:00:43 localhost systemd[1]: Stopped target Initrd Default Target.
Dec 07 09:00:43 localhost systemd[1]: Stopped target Basic System.
Dec 07 09:00:43 localhost systemd[1]: Stopped target Initrd Root Device.
Dec 07 09:00:43 localhost systemd[1]: Stopped target Initrd /usr File System.
Dec 07 09:00:43 localhost systemd[1]: Stopped target Path Units.
Dec 07 09:00:43 localhost systemd[1]: Stopped target Remote File Systems.
Dec 07 09:00:43 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Dec 07 09:00:43 localhost systemd[1]: Stopped target Slice Units.
Dec 07 09:00:43 localhost systemd[1]: Stopped target Socket Units.
Dec 07 09:00:43 localhost systemd[1]: Stopped target System Initialization.
Dec 07 09:00:43 localhost systemd[1]: Stopped target Local File Systems.
Dec 07 09:00:43 localhost systemd[1]: Stopped target Swaps.
Dec 07 09:00:43 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Dec 07 09:00:43 localhost systemd[1]: Stopped dracut mount hook.
Dec 07 09:00:43 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 07 09:00:43 localhost systemd[1]: Stopped dracut pre-mount hook.
Dec 07 09:00:43 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Dec 07 09:00:43 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 07 09:00:43 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Dec 07 09:00:43 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 07 09:00:43 localhost systemd[1]: Stopped dracut initqueue hook.
Dec 07 09:00:43 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 07 09:00:43 localhost systemd[1]: Stopped Apply Kernel Variables.
Dec 07 09:00:43 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 07 09:00:43 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Dec 07 09:00:43 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 07 09:00:43 localhost systemd[1]: Stopped Coldplug All udev Devices.
Dec 07 09:00:43 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 07 09:00:43 localhost systemd[1]: Stopped dracut pre-trigger hook.
Dec 07 09:00:43 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Dec 07 09:00:43 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 07 09:00:43 localhost systemd[1]: Stopped Setup Virtual Console.
Dec 07 09:00:43 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 07 09:00:43 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 07 09:00:43 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 07 09:00:43 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Dec 07 09:00:43 localhost systemd[1]: systemd-udevd.service: Consumed 1.063s CPU time.
Dec 07 09:00:43 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 07 09:00:43 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Dec 07 09:00:43 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 07 09:00:43 localhost systemd[1]: Closed udev Control Socket.
Dec 07 09:00:43 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 07 09:00:43 localhost systemd[1]: Closed udev Kernel Socket.
Dec 07 09:00:43 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 07 09:00:43 localhost systemd[1]: Stopped dracut pre-udev hook.
Dec 07 09:00:43 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 07 09:00:43 localhost systemd[1]: Stopped dracut cmdline hook.
Dec 07 09:00:43 localhost systemd[1]: Starting Cleanup udev Database...
Dec 07 09:00:43 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 07 09:00:43 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Dec 07 09:00:43 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 07 09:00:43 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Dec 07 09:00:43 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Dec 07 09:00:43 localhost systemd[1]: Stopped Create System Users.
Dec 07 09:00:43 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 07 09:00:43 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Dec 07 09:00:43 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 07 09:00:43 localhost systemd[1]: Finished Cleanup udev Database.
Dec 07 09:00:43 localhost systemd[1]: Reached target Switch Root.
Dec 07 09:00:43 localhost systemd[1]: Starting Switch Root...
Dec 07 09:00:43 localhost systemd[1]: Switching root.
Dec 07 09:00:43 localhost systemd-journald[305]: Journal stopped
Dec 07 09:00:44 localhost systemd-journald[305]: Received SIGTERM from PID 1 (systemd).
Dec 07 09:00:44 localhost kernel: audit: type=1404 audit(1765098043.745:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Dec 07 09:00:44 localhost kernel: SELinux:  policy capability network_peer_controls=1
Dec 07 09:00:44 localhost kernel: SELinux:  policy capability open_perms=1
Dec 07 09:00:44 localhost kernel: SELinux:  policy capability extended_socket_class=1
Dec 07 09:00:44 localhost kernel: SELinux:  policy capability always_check_network=0
Dec 07 09:00:44 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 07 09:00:44 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 07 09:00:44 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 07 09:00:44 localhost kernel: audit: type=1403 audit(1765098043.875:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 07 09:00:44 localhost systemd[1]: Successfully loaded SELinux policy in 134.615ms.
Dec 07 09:00:44 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 29.734ms.
Dec 07 09:00:44 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 07 09:00:44 localhost systemd[1]: Detected virtualization kvm.
Dec 07 09:00:44 localhost systemd[1]: Detected architecture x86-64.
Dec 07 09:00:44 localhost systemd-rc-local-generator[635]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:00:44 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 07 09:00:44 localhost systemd[1]: Stopped Switch Root.
Dec 07 09:00:44 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 07 09:00:44 localhost systemd[1]: Created slice Slice /system/getty.
Dec 07 09:00:44 localhost systemd[1]: Created slice Slice /system/serial-getty.
Dec 07 09:00:44 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Dec 07 09:00:44 localhost systemd[1]: Created slice User and Session Slice.
Dec 07 09:00:44 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Dec 07 09:00:44 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Dec 07 09:00:44 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Dec 07 09:00:44 localhost systemd[1]: Reached target Local Encrypted Volumes.
Dec 07 09:00:44 localhost systemd[1]: Stopped target Switch Root.
Dec 07 09:00:44 localhost systemd[1]: Stopped target Initrd File Systems.
Dec 07 09:00:44 localhost systemd[1]: Stopped target Initrd Root File System.
Dec 07 09:00:44 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Dec 07 09:00:44 localhost systemd[1]: Reached target Path Units.
Dec 07 09:00:44 localhost systemd[1]: Reached target rpc_pipefs.target.
Dec 07 09:00:44 localhost systemd[1]: Reached target Slice Units.
Dec 07 09:00:44 localhost systemd[1]: Reached target Swaps.
Dec 07 09:00:44 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Dec 07 09:00:44 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Dec 07 09:00:44 localhost systemd[1]: Reached target RPC Port Mapper.
Dec 07 09:00:44 localhost systemd[1]: Listening on Process Core Dump Socket.
Dec 07 09:00:44 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Dec 07 09:00:44 localhost systemd[1]: Listening on udev Control Socket.
Dec 07 09:00:44 localhost systemd[1]: Listening on udev Kernel Socket.
Dec 07 09:00:44 localhost systemd[1]: Mounting Huge Pages File System...
Dec 07 09:00:44 localhost systemd[1]: Mounting POSIX Message Queue File System...
Dec 07 09:00:44 localhost systemd[1]: Mounting Kernel Debug File System...
Dec 07 09:00:44 localhost systemd[1]: Mounting Kernel Trace File System...
Dec 07 09:00:44 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec 07 09:00:44 localhost systemd[1]: Starting Create List of Static Device Nodes...
Dec 07 09:00:44 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 07 09:00:44 localhost systemd[1]: Starting Load Kernel Module drm...
Dec 07 09:00:44 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Dec 07 09:00:44 localhost systemd[1]: Starting Load Kernel Module fuse...
Dec 07 09:00:44 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Dec 07 09:00:44 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 07 09:00:44 localhost systemd[1]: Stopped File System Check on Root Device.
Dec 07 09:00:44 localhost systemd[1]: Stopped Journal Service.
Dec 07 09:00:44 localhost systemd[1]: Starting Journal Service...
Dec 07 09:00:44 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec 07 09:00:44 localhost systemd[1]: Starting Generate network units from Kernel command line...
Dec 07 09:00:44 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 07 09:00:44 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Dec 07 09:00:44 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 07 09:00:44 localhost systemd[1]: Starting Apply Kernel Variables...
Dec 07 09:00:44 localhost systemd[1]: Starting Coldplug All udev Devices...
Dec 07 09:00:44 localhost kernel: fuse: init (API version 7.37)
Dec 07 09:00:44 localhost systemd-journald[678]: Journal started
Dec 07 09:00:44 localhost systemd-journald[678]: Runtime Journal (/run/log/journal/4d4ef2323cc3337bbfd9081b2a323b4e) is 8.0M, max 153.6M, 145.6M free.
Dec 07 09:00:44 localhost systemd[1]: Queued start job for default target Multi-User System.
Dec 07 09:00:44 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 07 09:00:44 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Dec 07 09:00:44 localhost systemd[1]: Started Journal Service.
Dec 07 09:00:44 localhost systemd[1]: Mounted Huge Pages File System.
Dec 07 09:00:44 localhost systemd[1]: Mounted POSIX Message Queue File System.
Dec 07 09:00:44 localhost systemd[1]: Mounted Kernel Debug File System.
Dec 07 09:00:44 localhost systemd[1]: Mounted Kernel Trace File System.
Dec 07 09:00:44 localhost systemd[1]: Finished Create List of Static Device Nodes.
Dec 07 09:00:44 localhost kernel: ACPI: bus type drm_connector registered
Dec 07 09:00:44 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 07 09:00:44 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 07 09:00:44 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 07 09:00:44 localhost systemd[1]: Finished Load Kernel Module drm.
Dec 07 09:00:44 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 07 09:00:44 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Dec 07 09:00:44 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 07 09:00:44 localhost systemd[1]: Finished Load Kernel Module fuse.
Dec 07 09:00:44 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Dec 07 09:00:44 localhost systemd[1]: Finished Generate network units from Kernel command line.
Dec 07 09:00:44 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Dec 07 09:00:44 localhost systemd[1]: Finished Apply Kernel Variables.
Dec 07 09:00:44 localhost systemd[1]: Mounting FUSE Control File System...
Dec 07 09:00:44 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec 07 09:00:44 localhost systemd[1]: Starting Rebuild Hardware Database...
Dec 07 09:00:44 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Dec 07 09:00:44 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 07 09:00:44 localhost systemd[1]: Starting Load/Save OS Random Seed...
Dec 07 09:00:44 localhost systemd-journald[678]: Runtime Journal (/run/log/journal/4d4ef2323cc3337bbfd9081b2a323b4e) is 8.0M, max 153.6M, 145.6M free.
Dec 07 09:00:44 localhost systemd-journald[678]: Received client request to flush runtime journal.
Dec 07 09:00:44 localhost systemd[1]: Starting Create System Users...
Dec 07 09:00:44 localhost systemd[1]: Mounted FUSE Control File System.
Dec 07 09:00:44 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Dec 07 09:00:44 localhost systemd[1]: Finished Coldplug All udev Devices.
Dec 07 09:00:44 localhost systemd[1]: Finished Load/Save OS Random Seed.
Dec 07 09:00:44 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec 07 09:00:44 localhost systemd[1]: Finished Create System Users.
Dec 07 09:00:44 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Dec 07 09:00:44 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Dec 07 09:00:44 localhost systemd[1]: Reached target Preparation for Local File Systems.
Dec 07 09:00:44 localhost systemd[1]: Reached target Local File Systems.
Dec 07 09:00:44 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Dec 07 09:00:44 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Dec 07 09:00:44 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 07 09:00:44 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Dec 07 09:00:44 localhost systemd[1]: Starting Automatic Boot Loader Update...
Dec 07 09:00:44 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Dec 07 09:00:44 localhost systemd[1]: Starting Create Volatile Files and Directories...
Dec 07 09:00:44 localhost bootctl[696]: Couldn't find EFI system partition, skipping.
Dec 07 09:00:44 localhost systemd[1]: Finished Automatic Boot Loader Update.
Dec 07 09:00:44 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Dec 07 09:00:44 localhost systemd[1]: Finished Create Volatile Files and Directories.
Dec 07 09:00:44 localhost systemd[1]: Starting Security Auditing Service...
Dec 07 09:00:44 localhost systemd[1]: Starting RPC Bind...
Dec 07 09:00:44 localhost systemd[1]: Starting Rebuild Journal Catalog...
Dec 07 09:00:44 localhost auditd[703]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Dec 07 09:00:44 localhost auditd[703]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Dec 07 09:00:44 localhost systemd[1]: Finished Rebuild Journal Catalog.
Dec 07 09:00:44 localhost augenrules[708]: /sbin/augenrules: No change
Dec 07 09:00:44 localhost systemd[1]: Started RPC Bind.
Dec 07 09:00:44 localhost augenrules[723]: No rules
Dec 07 09:00:44 localhost augenrules[723]: enabled 1
Dec 07 09:00:44 localhost augenrules[723]: failure 1
Dec 07 09:00:44 localhost augenrules[723]: pid 703
Dec 07 09:00:44 localhost augenrules[723]: rate_limit 0
Dec 07 09:00:44 localhost augenrules[723]: backlog_limit 8192
Dec 07 09:00:44 localhost augenrules[723]: lost 0
Dec 07 09:00:44 localhost augenrules[723]: backlog 4
Dec 07 09:00:44 localhost augenrules[723]: backlog_wait_time 60000
Dec 07 09:00:44 localhost augenrules[723]: backlog_wait_time_actual 0
Dec 07 09:00:44 localhost augenrules[723]: enabled 1
Dec 07 09:00:44 localhost augenrules[723]: failure 1
Dec 07 09:00:44 localhost augenrules[723]: pid 703
Dec 07 09:00:44 localhost augenrules[723]: rate_limit 0
Dec 07 09:00:44 localhost augenrules[723]: backlog_limit 8192
Dec 07 09:00:44 localhost augenrules[723]: lost 0
Dec 07 09:00:44 localhost augenrules[723]: backlog 4
Dec 07 09:00:44 localhost augenrules[723]: backlog_wait_time 60000
Dec 07 09:00:44 localhost augenrules[723]: backlog_wait_time_actual 0
Dec 07 09:00:44 localhost augenrules[723]: enabled 1
Dec 07 09:00:44 localhost augenrules[723]: failure 1
Dec 07 09:00:44 localhost augenrules[723]: pid 703
Dec 07 09:00:44 localhost augenrules[723]: rate_limit 0
Dec 07 09:00:44 localhost augenrules[723]: backlog_limit 8192
Dec 07 09:00:44 localhost augenrules[723]: lost 0
Dec 07 09:00:44 localhost augenrules[723]: backlog 2
Dec 07 09:00:44 localhost augenrules[723]: backlog_wait_time 60000
Dec 07 09:00:44 localhost augenrules[723]: backlog_wait_time_actual 0
Dec 07 09:00:44 localhost systemd[1]: Started Security Auditing Service.
Dec 07 09:00:44 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Dec 07 09:00:44 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Dec 07 09:00:45 localhost systemd[1]: Finished Rebuild Hardware Database.
Dec 07 09:00:45 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 07 09:00:45 localhost systemd[1]: Starting Update is Completed...
Dec 07 09:00:45 localhost systemd[1]: Finished Update is Completed.
Dec 07 09:00:45 localhost systemd-udevd[731]: Using default interface naming scheme 'rhel-9.0'.
Dec 07 09:00:45 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 07 09:00:45 localhost systemd[1]: Reached target System Initialization.
Dec 07 09:00:45 localhost systemd[1]: Started dnf makecache --timer.
Dec 07 09:00:45 localhost systemd[1]: Started Daily rotation of log files.
Dec 07 09:00:45 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Dec 07 09:00:45 localhost systemd[1]: Reached target Timer Units.
Dec 07 09:00:45 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec 07 09:00:45 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Dec 07 09:00:45 localhost systemd[1]: Reached target Socket Units.
Dec 07 09:00:45 localhost systemd[1]: Starting D-Bus System Message Bus...
Dec 07 09:00:45 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 07 09:00:45 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 07 09:00:45 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Dec 07 09:00:45 localhost systemd-udevd[742]: Network interface NamePolicy= disabled on kernel command line.
Dec 07 09:00:45 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 07 09:00:45 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 07 09:00:45 localhost systemd[1]: Started D-Bus System Message Bus.
Dec 07 09:00:45 localhost systemd[1]: Reached target Basic System.
Dec 07 09:00:45 localhost dbus-broker-lau[740]: Ready
Dec 07 09:00:45 localhost systemd[1]: Starting NTP client/server...
Dec 07 09:00:45 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Dec 07 09:00:45 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Dec 07 09:00:45 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Dec 07 09:00:45 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec 07 09:00:45 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 07 09:00:45 localhost systemd[1]: Starting IPv4 firewall with iptables...
Dec 07 09:00:45 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Dec 07 09:00:45 localhost chronyd[789]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec 07 09:00:45 localhost chronyd[789]: Loaded 0 symmetric keys
Dec 07 09:00:45 localhost chronyd[789]: Using right/UTC timezone to obtain leap second data
Dec 07 09:00:45 localhost chronyd[789]: Loaded seccomp filter (level 2)
Dec 07 09:00:45 localhost systemd[1]: Started irqbalance daemon.
Dec 07 09:00:45 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Dec 07 09:00:45 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 07 09:00:45 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 07 09:00:45 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 07 09:00:45 localhost systemd[1]: Reached target sshd-keygen.target.
Dec 07 09:00:45 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Dec 07 09:00:45 localhost systemd[1]: Reached target User and Group Name Lookups.
Dec 07 09:00:45 localhost systemd[1]: Starting User Login Management...
Dec 07 09:00:45 localhost systemd[1]: Started NTP client/server.
Dec 07 09:00:45 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Dec 07 09:00:45 localhost systemd-logind[796]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 07 09:00:45 localhost systemd-logind[796]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 07 09:00:45 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Dec 07 09:00:45 localhost systemd-logind[796]: New seat seat0.
Dec 07 09:00:45 localhost systemd[1]: Started User Login Management.
Dec 07 09:00:45 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Dec 07 09:00:45 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Dec 07 09:00:45 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Dec 07 09:00:45 localhost kernel: Console: switching to colour dummy device 80x25
Dec 07 09:00:45 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec 07 09:00:45 localhost kernel: [drm] features: -context_init
Dec 07 09:00:45 localhost kernel: [drm] number of scanouts: 1
Dec 07 09:00:45 localhost kernel: [drm] number of cap sets: 0
Dec 07 09:00:45 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Dec 07 09:00:45 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Dec 07 09:00:45 localhost kernel: Console: switching to colour frame buffer device 128x48
Dec 07 09:00:45 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec 07 09:00:45 localhost kernel: kvm_amd: TSC scaling supported
Dec 07 09:00:45 localhost kernel: kvm_amd: Nested Virtualization enabled
Dec 07 09:00:45 localhost kernel: kvm_amd: Nested Paging enabled
Dec 07 09:00:45 localhost kernel: kvm_amd: LBR virtualization supported
Dec 07 09:00:45 localhost iptables.init[779]: iptables: Applying firewall rules: [  OK  ]
Dec 07 09:00:45 localhost systemd[1]: Finished IPv4 firewall with iptables.
Dec 07 09:00:45 localhost cloud-init[839]: Cloud-init v. 24.4-7.el9 running 'init-local' at Sun, 07 Dec 2025 09:00:45 +0000. Up 6.30 seconds.
Dec 07 09:00:45 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Dec 07 09:00:45 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Dec 07 09:00:45 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpdkd5okxr.mount: Deactivated successfully.
Dec 07 09:00:45 localhost systemd[1]: Starting Hostname Service...
Dec 07 09:00:46 localhost systemd[1]: Started Hostname Service.
Dec 07 09:00:46 np0005549475.novalocal systemd-hostnamed[853]: Hostname set to <np0005549475.novalocal> (static)
Dec 07 09:00:46 np0005549475.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Dec 07 09:00:46 np0005549475.novalocal systemd[1]: Reached target Preparation for Network.
Dec 07 09:00:46 np0005549475.novalocal systemd[1]: Starting Network Manager...
Dec 07 09:00:46 np0005549475.novalocal NetworkManager[857]: <info>  [1765098046.2425] NetworkManager (version 1.54.1-1.el9) is starting... (boot:7c26b365-f356-47da-bd1a-0ae584570406)
Dec 07 09:00:46 np0005549475.novalocal NetworkManager[857]: <info>  [1765098046.2430] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec 07 09:00:46 np0005549475.novalocal NetworkManager[857]: <info>  [1765098046.2511] manager[0x565032c94080]: monitoring kernel firmware directory '/lib/firmware'.
Dec 07 09:00:46 np0005549475.novalocal NetworkManager[857]: <info>  [1765098046.2552] hostname: hostname: using hostnamed
Dec 07 09:00:46 np0005549475.novalocal NetworkManager[857]: <info>  [1765098046.2552] hostname: static hostname changed from (none) to "np0005549475.novalocal"
Dec 07 09:00:46 np0005549475.novalocal NetworkManager[857]: <info>  [1765098046.2557] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 07 09:00:46 np0005549475.novalocal NetworkManager[857]: <info>  [1765098046.2675] manager[0x565032c94080]: rfkill: Wi-Fi hardware radio set enabled
Dec 07 09:00:46 np0005549475.novalocal NetworkManager[857]: <info>  [1765098046.2676] manager[0x565032c94080]: rfkill: WWAN hardware radio set enabled
Dec 07 09:00:46 np0005549475.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Dec 07 09:00:46 np0005549475.novalocal NetworkManager[857]: <info>  [1765098046.2745] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec 07 09:00:46 np0005549475.novalocal NetworkManager[857]: <info>  [1765098046.2745] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 07 09:00:46 np0005549475.novalocal NetworkManager[857]: <info>  [1765098046.2747] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 07 09:00:46 np0005549475.novalocal NetworkManager[857]: <info>  [1765098046.2749] manager: Networking is enabled by state file
Dec 07 09:00:46 np0005549475.novalocal NetworkManager[857]: <info>  [1765098046.2752] settings: Loaded settings plugin: keyfile (internal)
Dec 07 09:00:46 np0005549475.novalocal NetworkManager[857]: <info>  [1765098046.2770] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 07 09:00:46 np0005549475.novalocal NetworkManager[857]: <info>  [1765098046.2802] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 07 09:00:46 np0005549475.novalocal NetworkManager[857]: <info>  [1765098046.2823] dhcp: init: Using DHCP client 'internal'
Dec 07 09:00:46 np0005549475.novalocal NetworkManager[857]: <info>  [1765098046.2828] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 07 09:00:46 np0005549475.novalocal NetworkManager[857]: <info>  [1765098046.2849] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 07 09:00:46 np0005549475.novalocal NetworkManager[857]: <info>  [1765098046.2859] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 07 09:00:46 np0005549475.novalocal NetworkManager[857]: <info>  [1765098046.2870] device (lo): Activation: starting connection 'lo' (9cb2e8de-c1b3-45af-836f-38efb3fd24ca)
Dec 07 09:00:46 np0005549475.novalocal NetworkManager[857]: <info>  [1765098046.2883] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 07 09:00:46 np0005549475.novalocal NetworkManager[857]: <info>  [1765098046.2888] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 07 09:00:46 np0005549475.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 07 09:00:46 np0005549475.novalocal systemd[1]: Started Network Manager.
Dec 07 09:00:46 np0005549475.novalocal NetworkManager[857]: <info>  [1765098046.2946] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 07 09:00:46 np0005549475.novalocal systemd[1]: Reached target Network.
Dec 07 09:00:46 np0005549475.novalocal NetworkManager[857]: <info>  [1765098046.2951] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 07 09:00:46 np0005549475.novalocal NetworkManager[857]: <info>  [1765098046.2953] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 07 09:00:46 np0005549475.novalocal NetworkManager[857]: <info>  [1765098046.2955] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 07 09:00:46 np0005549475.novalocal NetworkManager[857]: <info>  [1765098046.2957] device (eth0): carrier: link connected
Dec 07 09:00:46 np0005549475.novalocal NetworkManager[857]: <info>  [1765098046.2960] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 07 09:00:46 np0005549475.novalocal NetworkManager[857]: <info>  [1765098046.2968] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec 07 09:00:46 np0005549475.novalocal NetworkManager[857]: <info>  [1765098046.2977] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 07 09:00:46 np0005549475.novalocal systemd[1]: Starting Network Manager Wait Online...
Dec 07 09:00:46 np0005549475.novalocal NetworkManager[857]: <info>  [1765098046.2981] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 07 09:00:46 np0005549475.novalocal NetworkManager[857]: <info>  [1765098046.2982] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 07 09:00:46 np0005549475.novalocal NetworkManager[857]: <info>  [1765098046.2984] manager: NetworkManager state is now CONNECTING
Dec 07 09:00:46 np0005549475.novalocal NetworkManager[857]: <info>  [1765098046.2985] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 07 09:00:46 np0005549475.novalocal NetworkManager[857]: <info>  [1765098046.2996] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 07 09:00:46 np0005549475.novalocal NetworkManager[857]: <info>  [1765098046.3000] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 07 09:00:46 np0005549475.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Dec 07 09:00:46 np0005549475.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 07 09:00:46 np0005549475.novalocal NetworkManager[857]: <info>  [1765098046.3099] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 07 09:00:46 np0005549475.novalocal NetworkManager[857]: <info>  [1765098046.3103] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 07 09:00:46 np0005549475.novalocal NetworkManager[857]: <info>  [1765098046.3115] device (lo): Activation: successful, device activated.
Dec 07 09:00:46 np0005549475.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Dec 07 09:00:46 np0005549475.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec 07 09:00:46 np0005549475.novalocal systemd[1]: Reached target NFS client services.
Dec 07 09:00:46 np0005549475.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Dec 07 09:00:46 np0005549475.novalocal systemd[1]: Reached target Remote File Systems.
Dec 07 09:00:46 np0005549475.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 07 09:00:47 np0005549475.novalocal NetworkManager[857]: <info>  [1765098047.6237] dhcp4 (eth0): state changed new lease, address=38.102.83.74
Dec 07 09:00:47 np0005549475.novalocal NetworkManager[857]: <info>  [1765098047.6253] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 07 09:00:47 np0005549475.novalocal NetworkManager[857]: <info>  [1765098047.6282] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 07 09:00:47 np0005549475.novalocal NetworkManager[857]: <info>  [1765098047.6310] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 07 09:00:47 np0005549475.novalocal NetworkManager[857]: <info>  [1765098047.6313] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 07 09:00:47 np0005549475.novalocal NetworkManager[857]: <info>  [1765098047.6317] manager: NetworkManager state is now CONNECTED_SITE
Dec 07 09:00:47 np0005549475.novalocal NetworkManager[857]: <info>  [1765098047.6324] device (eth0): Activation: successful, device activated.
Dec 07 09:00:47 np0005549475.novalocal NetworkManager[857]: <info>  [1765098047.6336] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 07 09:00:47 np0005549475.novalocal NetworkManager[857]: <info>  [1765098047.6342] manager: startup complete
Dec 07 09:00:47 np0005549475.novalocal systemd[1]: Finished Network Manager Wait Online.
Dec 07 09:00:47 np0005549475.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Dec 07 09:00:47 np0005549475.novalocal cloud-init[920]: Cloud-init v. 24.4-7.el9 running 'init' at Sun, 07 Dec 2025 09:00:47 +0000. Up 8.58 seconds.
Dec 07 09:00:47 np0005549475.novalocal cloud-init[920]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Dec 07 09:00:47 np0005549475.novalocal cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 07 09:00:47 np0005549475.novalocal cloud-init[920]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Dec 07 09:00:47 np0005549475.novalocal cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 07 09:00:47 np0005549475.novalocal cloud-init[920]: ci-info: |  eth0  | True |         38.102.83.74         | 255.255.255.0 | global | fa:16:3e:00:b4:ff |
Dec 07 09:00:47 np0005549475.novalocal cloud-init[920]: ci-info: |  eth0  | True | fe80::f816:3eff:fe00:b4ff/64 |       .       |  link  | fa:16:3e:00:b4:ff |
Dec 07 09:00:47 np0005549475.novalocal cloud-init[920]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Dec 07 09:00:47 np0005549475.novalocal cloud-init[920]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Dec 07 09:00:47 np0005549475.novalocal cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 07 09:00:47 np0005549475.novalocal cloud-init[920]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Dec 07 09:00:47 np0005549475.novalocal cloud-init[920]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 07 09:00:47 np0005549475.novalocal cloud-init[920]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Dec 07 09:00:47 np0005549475.novalocal cloud-init[920]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 07 09:00:47 np0005549475.novalocal cloud-init[920]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Dec 07 09:00:47 np0005549475.novalocal cloud-init[920]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Dec 07 09:00:47 np0005549475.novalocal cloud-init[920]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Dec 07 09:00:47 np0005549475.novalocal cloud-init[920]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 07 09:00:47 np0005549475.novalocal cloud-init[920]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Dec 07 09:00:47 np0005549475.novalocal cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 07 09:00:47 np0005549475.novalocal cloud-init[920]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Dec 07 09:00:47 np0005549475.novalocal cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 07 09:00:47 np0005549475.novalocal cloud-init[920]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Dec 07 09:00:47 np0005549475.novalocal cloud-init[920]: ci-info: |   3   |    local    |    ::   |    eth0   |   U   |
Dec 07 09:00:48 np0005549475.novalocal cloud-init[920]: ci-info: |   4   |  multicast  |    ::   |    eth0   |   U   |
Dec 07 09:00:48 np0005549475.novalocal cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 07 09:00:48 np0005549475.novalocal useradd[988]: new group: name=cloud-user, GID=1001
Dec 07 09:00:48 np0005549475.novalocal useradd[988]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Dec 07 09:00:48 np0005549475.novalocal useradd[988]: add 'cloud-user' to group 'adm'
Dec 07 09:00:48 np0005549475.novalocal useradd[988]: add 'cloud-user' to group 'systemd-journal'
Dec 07 09:00:48 np0005549475.novalocal useradd[988]: add 'cloud-user' to shadow group 'adm'
Dec 07 09:00:48 np0005549475.novalocal useradd[988]: add 'cloud-user' to shadow group 'systemd-journal'
Dec 07 09:00:49 np0005549475.novalocal cloud-init[920]: Generating public/private rsa key pair.
Dec 07 09:00:49 np0005549475.novalocal cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Dec 07 09:00:49 np0005549475.novalocal cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Dec 07 09:00:49 np0005549475.novalocal cloud-init[920]: The key fingerprint is:
Dec 07 09:00:49 np0005549475.novalocal cloud-init[920]: SHA256:EkC7HAMaXm24fIZjzjoSCYWtFBieStNmyHOlAECrl5o root@np0005549475.novalocal
Dec 07 09:00:49 np0005549475.novalocal cloud-init[920]: The key's randomart image is:
Dec 07 09:00:49 np0005549475.novalocal cloud-init[920]: +---[RSA 3072]----+
Dec 07 09:00:49 np0005549475.novalocal cloud-init[920]: |XBoo=.           |
Dec 07 09:00:49 np0005549475.novalocal cloud-init[920]: |*+Xoo=           |
Dec 07 09:00:49 np0005549475.novalocal cloud-init[920]: |o&.*B .          |
Dec 07 09:00:49 np0005549475.novalocal cloud-init[920]: |* *O * .         |
Dec 07 09:00:49 np0005549475.novalocal cloud-init[920]: |+.* * . S        |
Dec 07 09:00:49 np0005549475.novalocal cloud-init[920]: |o+ o   .         |
Dec 07 09:00:49 np0005549475.novalocal cloud-init[920]: |E..              |
Dec 07 09:00:49 np0005549475.novalocal cloud-init[920]: |.o               |
Dec 07 09:00:49 np0005549475.novalocal cloud-init[920]: |. .              |
Dec 07 09:00:49 np0005549475.novalocal cloud-init[920]: +----[SHA256]-----+
Dec 07 09:00:49 np0005549475.novalocal cloud-init[920]: Generating public/private ecdsa key pair.
Dec 07 09:00:49 np0005549475.novalocal cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Dec 07 09:00:49 np0005549475.novalocal cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Dec 07 09:00:49 np0005549475.novalocal cloud-init[920]: The key fingerprint is:
Dec 07 09:00:49 np0005549475.novalocal cloud-init[920]: SHA256:/lMiFpL9kGjGBD+pIKL/IYSgY+1bmk3j/L8l9XP8jV8 root@np0005549475.novalocal
Dec 07 09:00:49 np0005549475.novalocal cloud-init[920]: The key's randomart image is:
Dec 07 09:00:49 np0005549475.novalocal cloud-init[920]: +---[ECDSA 256]---+
Dec 07 09:00:49 np0005549475.novalocal cloud-init[920]: |    ..           |
Dec 07 09:00:49 np0005549475.novalocal cloud-init[920]: |     ...         |
Dec 07 09:00:49 np0005549475.novalocal cloud-init[920]: |+ .  o++ .       |
Dec 07 09:00:49 np0005549475.novalocal cloud-init[920]: |=o.. .B.=        |
Dec 07 09:00:49 np0005549475.novalocal cloud-init[920]: |=....o .S+ .     |
Dec 07 09:00:49 np0005549475.novalocal cloud-init[920]: |.+.    .o + o .  |
Dec 07 09:00:49 np0005549475.novalocal cloud-init[920]: |  o..+ ..o + o oE|
Dec 07 09:00:49 np0005549475.novalocal cloud-init[920]: |   oX..  .+   o.+|
Dec 07 09:00:49 np0005549475.novalocal cloud-init[920]: |   +.+...oo.  ..+|
Dec 07 09:00:49 np0005549475.novalocal cloud-init[920]: +----[SHA256]-----+
Dec 07 09:00:49 np0005549475.novalocal cloud-init[920]: Generating public/private ed25519 key pair.
Dec 07 09:00:49 np0005549475.novalocal cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Dec 07 09:00:49 np0005549475.novalocal cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Dec 07 09:00:49 np0005549475.novalocal cloud-init[920]: The key fingerprint is:
Dec 07 09:00:49 np0005549475.novalocal cloud-init[920]: SHA256:Gn5LLNnW+5iWAv8DeNGzifjannvLRsRx9S639a5G7N0 root@np0005549475.novalocal
Dec 07 09:00:49 np0005549475.novalocal cloud-init[920]: The key's randomart image is:
Dec 07 09:00:49 np0005549475.novalocal cloud-init[920]: +--[ED25519 256]--+
Dec 07 09:00:49 np0005549475.novalocal cloud-init[920]: |             ..  |
Dec 07 09:00:49 np0005549475.novalocal cloud-init[920]: |          . .  . |
Dec 07 09:00:49 np0005549475.novalocal cloud-init[920]: |         o o    .|
Dec 07 09:00:49 np0005549475.novalocal cloud-init[920]: |        . =    . |
Dec 07 09:00:49 np0005549475.novalocal cloud-init[920]: |      .oS+ + .. +|
Dec 07 09:00:49 np0005549475.novalocal cloud-init[920]: |     .+*+.+   oo+|
Dec 07 09:00:49 np0005549475.novalocal cloud-init[920]: |      ==*o.. o oo|
Dec 07 09:00:49 np0005549475.novalocal cloud-init[920]: |       *++*+  o.E|
Dec 07 09:00:49 np0005549475.novalocal cloud-init[920]: |      .o*BB+.....|
Dec 07 09:00:49 np0005549475.novalocal cloud-init[920]: +----[SHA256]-----+
Dec 07 09:00:49 np0005549475.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Dec 07 09:00:49 np0005549475.novalocal systemd[1]: Reached target Cloud-config availability.
Dec 07 09:00:49 np0005549475.novalocal systemd[1]: Reached target Network is Online.
Dec 07 09:00:49 np0005549475.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Dec 07 09:00:49 np0005549475.novalocal systemd[1]: Starting Crash recovery kernel arming...
Dec 07 09:00:49 np0005549475.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Dec 07 09:00:49 np0005549475.novalocal systemd[1]: Starting System Logging Service...
Dec 07 09:00:49 np0005549475.novalocal sm-notify[1005]: Version 2.5.4 starting
Dec 07 09:00:49 np0005549475.novalocal systemd[1]: Starting OpenSSH server daemon...
Dec 07 09:00:49 np0005549475.novalocal systemd[1]: Starting Permit User Sessions...
Dec 07 09:00:49 np0005549475.novalocal systemd[1]: Started Notify NFS peers of a restart.
Dec 07 09:00:49 np0005549475.novalocal sshd[1007]: Server listening on 0.0.0.0 port 22.
Dec 07 09:00:49 np0005549475.novalocal sshd[1007]: Server listening on :: port 22.
Dec 07 09:00:49 np0005549475.novalocal systemd[1]: Started OpenSSH server daemon.
Dec 07 09:00:49 np0005549475.novalocal systemd[1]: Finished Permit User Sessions.
Dec 07 09:00:49 np0005549475.novalocal systemd[1]: Started Command Scheduler.
Dec 07 09:00:49 np0005549475.novalocal systemd[1]: Started Getty on tty1.
Dec 07 09:00:49 np0005549475.novalocal crond[1010]: (CRON) STARTUP (1.5.7)
Dec 07 09:00:49 np0005549475.novalocal crond[1010]: (CRON) INFO (Syslog will be used instead of sendmail.)
Dec 07 09:00:49 np0005549475.novalocal systemd[1]: Started Serial Getty on ttyS0.
Dec 07 09:00:49 np0005549475.novalocal crond[1010]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 21% if used.)
Dec 07 09:00:49 np0005549475.novalocal crond[1010]: (CRON) INFO (running with inotify support)
Dec 07 09:00:49 np0005549475.novalocal systemd[1]: Reached target Login Prompts.
Dec 07 09:00:49 np0005549475.novalocal rsyslogd[1006]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1006" x-info="https://www.rsyslog.com"] start
Dec 07 09:00:49 np0005549475.novalocal rsyslogd[1006]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Dec 07 09:00:49 np0005549475.novalocal systemd[1]: Started System Logging Service.
Dec 07 09:00:49 np0005549475.novalocal sshd-session[1023]: Unable to negotiate with 38.102.83.114 port 60026: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Dec 07 09:00:49 np0005549475.novalocal sshd-session[1014]: Connection closed by 38.102.83.114 port 60014 [preauth]
Dec 07 09:00:49 np0005549475.novalocal rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 07 09:00:49 np0005549475.novalocal systemd[1]: Reached target Multi-User System.
Dec 07 09:00:49 np0005549475.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Dec 07 09:00:49 np0005549475.novalocal sshd-session[1037]: Connection reset by 38.102.83.114 port 60042 [preauth]
Dec 07 09:00:49 np0005549475.novalocal sshd-session[1051]: Unable to negotiate with 38.102.83.114 port 60050: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Dec 07 09:00:49 np0005549475.novalocal cloud-init[1036]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Sun, 07 Dec 2025 09:00:49 +0000. Up 10.16 seconds.
Dec 07 09:00:49 np0005549475.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 07 09:00:49 np0005549475.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Dec 07 09:00:49 np0005549475.novalocal sshd-session[1055]: Unable to negotiate with 38.102.83.114 port 60066: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Dec 07 09:00:49 np0005549475.novalocal sshd-session[1076]: Connection reset by 38.102.83.114 port 60094 [preauth]
Dec 07 09:00:49 np0005549475.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Dec 07 09:00:49 np0005549475.novalocal sshd-session[1081]: Unable to negotiate with 38.102.83.114 port 60096: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Dec 07 09:00:49 np0005549475.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Dec 07 09:00:49 np0005549475.novalocal kdumpctl[1016]: kdump: No kdump initial ramdisk found.
Dec 07 09:00:49 np0005549475.novalocal kdumpctl[1016]: kdump: Rebuilding /boot/initramfs-5.14.0-645.el9.x86_64kdump.img
Dec 07 09:00:49 np0005549475.novalocal sshd-session[1086]: Unable to negotiate with 38.102.83.114 port 60102: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Dec 07 09:00:49 np0005549475.novalocal sshd-session[1060]: Connection closed by 38.102.83.114 port 60080 [preauth]
Dec 07 09:00:50 np0005549475.novalocal cloud-init[1208]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Sun, 07 Dec 2025 09:00:50 +0000. Up 10.84 seconds.
Dec 07 09:00:50 np0005549475.novalocal cloud-init[1236]: #############################################################
Dec 07 09:00:50 np0005549475.novalocal cloud-init[1240]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Dec 07 09:00:50 np0005549475.novalocal cloud-init[1244]: 256 SHA256:/lMiFpL9kGjGBD+pIKL/IYSgY+1bmk3j/L8l9XP8jV8 root@np0005549475.novalocal (ECDSA)
Dec 07 09:00:50 np0005549475.novalocal cloud-init[1250]: 256 SHA256:Gn5LLNnW+5iWAv8DeNGzifjannvLRsRx9S639a5G7N0 root@np0005549475.novalocal (ED25519)
Dec 07 09:00:50 np0005549475.novalocal cloud-init[1255]: 3072 SHA256:EkC7HAMaXm24fIZjzjoSCYWtFBieStNmyHOlAECrl5o root@np0005549475.novalocal (RSA)
Dec 07 09:00:50 np0005549475.novalocal cloud-init[1256]: -----END SSH HOST KEY FINGERPRINTS-----
Dec 07 09:00:50 np0005549475.novalocal cloud-init[1258]: #############################################################
Dec 07 09:00:50 np0005549475.novalocal cloud-init[1208]: Cloud-init v. 24.4-7.el9 finished at Sun, 07 Dec 2025 09:00:50 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 11.06 seconds
Dec 07 09:00:50 np0005549475.novalocal dracut[1301]: dracut-057-102.git20250818.el9
Dec 07 09:00:50 np0005549475.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Dec 07 09:00:50 np0005549475.novalocal systemd[1]: Reached target Cloud-init target.
Dec 07 09:00:50 np0005549475.novalocal dracut[1303]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-645.el9.x86_64kdump.img 5.14.0-645.el9.x86_64
Dec 07 09:00:51 np0005549475.novalocal dracut[1303]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Dec 07 09:00:51 np0005549475.novalocal dracut[1303]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Dec 07 09:00:51 np0005549475.novalocal dracut[1303]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Dec 07 09:00:51 np0005549475.novalocal dracut[1303]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec 07 09:00:51 np0005549475.novalocal dracut[1303]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec 07 09:00:51 np0005549475.novalocal dracut[1303]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec 07 09:00:51 np0005549475.novalocal dracut[1303]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec 07 09:00:51 np0005549475.novalocal dracut[1303]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec 07 09:00:51 np0005549475.novalocal dracut[1303]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec 07 09:00:51 np0005549475.novalocal dracut[1303]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec 07 09:00:51 np0005549475.novalocal dracut[1303]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec 07 09:00:51 np0005549475.novalocal dracut[1303]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec 07 09:00:51 np0005549475.novalocal dracut[1303]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec 07 09:00:51 np0005549475.novalocal dracut[1303]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec 07 09:00:51 np0005549475.novalocal dracut[1303]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Dec 07 09:00:51 np0005549475.novalocal dracut[1303]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Dec 07 09:00:51 np0005549475.novalocal dracut[1303]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec 07 09:00:51 np0005549475.novalocal dracut[1303]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec 07 09:00:51 np0005549475.novalocal dracut[1303]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec 07 09:00:51 np0005549475.novalocal dracut[1303]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec 07 09:00:51 np0005549475.novalocal dracut[1303]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec 07 09:00:51 np0005549475.novalocal dracut[1303]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec 07 09:00:51 np0005549475.novalocal dracut[1303]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec 07 09:00:51 np0005549475.novalocal dracut[1303]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec 07 09:00:51 np0005549475.novalocal dracut[1303]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec 07 09:00:51 np0005549475.novalocal dracut[1303]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec 07 09:00:51 np0005549475.novalocal dracut[1303]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec 07 09:00:51 np0005549475.novalocal dracut[1303]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec 07 09:00:51 np0005549475.novalocal dracut[1303]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec 07 09:00:51 np0005549475.novalocal dracut[1303]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec 07 09:00:51 np0005549475.novalocal dracut[1303]: Module 'resume' will not be installed, because it's in the list to be omitted!
Dec 07 09:00:51 np0005549475.novalocal dracut[1303]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Dec 07 09:00:51 np0005549475.novalocal dracut[1303]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Dec 07 09:00:52 np0005549475.novalocal dracut[1303]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec 07 09:00:52 np0005549475.novalocal dracut[1303]: memstrack is not available
Dec 07 09:00:52 np0005549475.novalocal dracut[1303]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec 07 09:00:52 np0005549475.novalocal dracut[1303]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec 07 09:00:52 np0005549475.novalocal dracut[1303]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec 07 09:00:52 np0005549475.novalocal dracut[1303]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec 07 09:00:52 np0005549475.novalocal dracut[1303]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec 07 09:00:52 np0005549475.novalocal dracut[1303]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec 07 09:00:52 np0005549475.novalocal dracut[1303]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec 07 09:00:52 np0005549475.novalocal dracut[1303]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec 07 09:00:52 np0005549475.novalocal dracut[1303]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec 07 09:00:52 np0005549475.novalocal dracut[1303]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec 07 09:00:52 np0005549475.novalocal dracut[1303]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec 07 09:00:52 np0005549475.novalocal dracut[1303]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec 07 09:00:52 np0005549475.novalocal dracut[1303]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec 07 09:00:52 np0005549475.novalocal dracut[1303]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec 07 09:00:52 np0005549475.novalocal dracut[1303]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec 07 09:00:52 np0005549475.novalocal dracut[1303]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec 07 09:00:52 np0005549475.novalocal dracut[1303]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec 07 09:00:52 np0005549475.novalocal dracut[1303]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec 07 09:00:52 np0005549475.novalocal dracut[1303]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec 07 09:00:52 np0005549475.novalocal dracut[1303]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec 07 09:00:52 np0005549475.novalocal dracut[1303]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec 07 09:00:52 np0005549475.novalocal dracut[1303]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec 07 09:00:52 np0005549475.novalocal dracut[1303]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec 07 09:00:52 np0005549475.novalocal dracut[1303]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec 07 09:00:52 np0005549475.novalocal dracut[1303]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec 07 09:00:52 np0005549475.novalocal dracut[1303]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec 07 09:00:52 np0005549475.novalocal dracut[1303]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec 07 09:00:52 np0005549475.novalocal dracut[1303]: memstrack is not available
Dec 07 09:00:52 np0005549475.novalocal dracut[1303]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec 07 09:00:52 np0005549475.novalocal dracut[1303]: *** Including module: systemd ***
Dec 07 09:00:52 np0005549475.novalocal chronyd[789]: Selected source 23.159.16.194 (2.centos.pool.ntp.org)
Dec 07 09:00:52 np0005549475.novalocal chronyd[789]: System clock TAI offset set to 37 seconds
Dec 07 09:00:52 np0005549475.novalocal dracut[1303]: *** Including module: fips ***
Dec 07 09:00:53 np0005549475.novalocal dracut[1303]: *** Including module: systemd-initrd ***
Dec 07 09:00:53 np0005549475.novalocal dracut[1303]: *** Including module: i18n ***
Dec 07 09:00:53 np0005549475.novalocal dracut[1303]: *** Including module: drm ***
Dec 07 09:00:53 np0005549475.novalocal dracut[1303]: *** Including module: prefixdevname ***
Dec 07 09:00:53 np0005549475.novalocal dracut[1303]: *** Including module: kernel-modules ***
Dec 07 09:00:53 np0005549475.novalocal kernel: block vda: the capability attribute has been deprecated.
Dec 07 09:00:54 np0005549475.novalocal dracut[1303]: *** Including module: kernel-modules-extra ***
Dec 07 09:00:54 np0005549475.novalocal dracut[1303]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Dec 07 09:00:54 np0005549475.novalocal dracut[1303]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Dec 07 09:00:54 np0005549475.novalocal dracut[1303]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Dec 07 09:00:54 np0005549475.novalocal dracut[1303]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Dec 07 09:00:54 np0005549475.novalocal dracut[1303]: *** Including module: qemu ***
Dec 07 09:00:54 np0005549475.novalocal dracut[1303]: *** Including module: fstab-sys ***
Dec 07 09:00:54 np0005549475.novalocal dracut[1303]: *** Including module: rootfs-block ***
Dec 07 09:00:54 np0005549475.novalocal dracut[1303]: *** Including module: terminfo ***
Dec 07 09:00:54 np0005549475.novalocal dracut[1303]: *** Including module: udev-rules ***
Dec 07 09:00:55 np0005549475.novalocal dracut[1303]: Skipping udev rule: 91-permissions.rules
Dec 07 09:00:55 np0005549475.novalocal dracut[1303]: Skipping udev rule: 80-drivers-modprobe.rules
Dec 07 09:00:55 np0005549475.novalocal dracut[1303]: *** Including module: virtiofs ***
Dec 07 09:00:55 np0005549475.novalocal dracut[1303]: *** Including module: dracut-systemd ***
Dec 07 09:00:55 np0005549475.novalocal irqbalance[786]: Cannot change IRQ 25 affinity: Operation not permitted
Dec 07 09:00:55 np0005549475.novalocal irqbalance[786]: IRQ 25 affinity is now unmanaged
Dec 07 09:00:55 np0005549475.novalocal irqbalance[786]: Cannot change IRQ 31 affinity: Operation not permitted
Dec 07 09:00:55 np0005549475.novalocal irqbalance[786]: IRQ 31 affinity is now unmanaged
Dec 07 09:00:55 np0005549475.novalocal irqbalance[786]: Cannot change IRQ 28 affinity: Operation not permitted
Dec 07 09:00:55 np0005549475.novalocal irqbalance[786]: IRQ 28 affinity is now unmanaged
Dec 07 09:00:55 np0005549475.novalocal irqbalance[786]: Cannot change IRQ 32 affinity: Operation not permitted
Dec 07 09:00:55 np0005549475.novalocal irqbalance[786]: IRQ 32 affinity is now unmanaged
Dec 07 09:00:55 np0005549475.novalocal irqbalance[786]: Cannot change IRQ 30 affinity: Operation not permitted
Dec 07 09:00:55 np0005549475.novalocal irqbalance[786]: IRQ 30 affinity is now unmanaged
Dec 07 09:00:55 np0005549475.novalocal irqbalance[786]: Cannot change IRQ 29 affinity: Operation not permitted
Dec 07 09:00:55 np0005549475.novalocal irqbalance[786]: IRQ 29 affinity is now unmanaged
Dec 07 09:00:55 np0005549475.novalocal dracut[1303]: *** Including module: usrmount ***
Dec 07 09:00:55 np0005549475.novalocal dracut[1303]: *** Including module: base ***
Dec 07 09:00:55 np0005549475.novalocal dracut[1303]: *** Including module: fs-lib ***
Dec 07 09:00:55 np0005549475.novalocal dracut[1303]: *** Including module: kdumpbase ***
Dec 07 09:00:56 np0005549475.novalocal dracut[1303]: *** Including module: microcode_ctl-fw_dir_override ***
Dec 07 09:00:56 np0005549475.novalocal dracut[1303]:   microcode_ctl module: mangling fw_dir
Dec 07 09:00:56 np0005549475.novalocal dracut[1303]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Dec 07 09:00:56 np0005549475.novalocal dracut[1303]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Dec 07 09:00:56 np0005549475.novalocal dracut[1303]:     microcode_ctl: configuration "intel" is ignored
Dec 07 09:00:56 np0005549475.novalocal dracut[1303]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Dec 07 09:00:56 np0005549475.novalocal dracut[1303]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Dec 07 09:00:56 np0005549475.novalocal dracut[1303]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Dec 07 09:00:56 np0005549475.novalocal dracut[1303]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Dec 07 09:00:56 np0005549475.novalocal dracut[1303]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Dec 07 09:00:56 np0005549475.novalocal dracut[1303]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Dec 07 09:00:56 np0005549475.novalocal dracut[1303]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Dec 07 09:00:56 np0005549475.novalocal dracut[1303]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Dec 07 09:00:56 np0005549475.novalocal dracut[1303]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Dec 07 09:00:56 np0005549475.novalocal dracut[1303]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Dec 07 09:00:56 np0005549475.novalocal dracut[1303]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Dec 07 09:00:56 np0005549475.novalocal dracut[1303]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Dec 07 09:00:56 np0005549475.novalocal dracut[1303]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Dec 07 09:00:56 np0005549475.novalocal dracut[1303]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Dec 07 09:00:56 np0005549475.novalocal dracut[1303]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Dec 07 09:00:56 np0005549475.novalocal dracut[1303]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Dec 07 09:00:56 np0005549475.novalocal dracut[1303]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Dec 07 09:00:56 np0005549475.novalocal dracut[1303]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Dec 07 09:00:56 np0005549475.novalocal dracut[1303]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Dec 07 09:00:56 np0005549475.novalocal dracut[1303]: *** Including module: openssl ***
Dec 07 09:00:56 np0005549475.novalocal dracut[1303]: *** Including module: shutdown ***
Dec 07 09:00:56 np0005549475.novalocal dracut[1303]: *** Including module: squash ***
Dec 07 09:00:56 np0005549475.novalocal dracut[1303]: *** Including modules done ***
Dec 07 09:00:56 np0005549475.novalocal dracut[1303]: *** Installing kernel module dependencies ***
Dec 07 09:00:57 np0005549475.novalocal dracut[1303]: *** Installing kernel module dependencies done ***
Dec 07 09:00:57 np0005549475.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 07 09:00:57 np0005549475.novalocal dracut[1303]: *** Resolving executable dependencies ***
Dec 07 09:00:59 np0005549475.novalocal dracut[1303]: *** Resolving executable dependencies done ***
Dec 07 09:00:59 np0005549475.novalocal dracut[1303]: *** Generating early-microcode cpio image ***
Dec 07 09:00:59 np0005549475.novalocal dracut[1303]: *** Store current command line parameters ***
Dec 07 09:00:59 np0005549475.novalocal dracut[1303]: Stored kernel commandline:
Dec 07 09:00:59 np0005549475.novalocal dracut[1303]: No dracut internal kernel commandline stored in the initramfs
Dec 07 09:00:59 np0005549475.novalocal dracut[1303]: *** Install squash loader ***
Dec 07 09:01:00 np0005549475.novalocal dracut[1303]: *** Squashing the files inside the initramfs ***
Dec 07 09:01:01 np0005549475.novalocal CROND[4144]: (root) CMD (run-parts /etc/cron.hourly)
Dec 07 09:01:01 np0005549475.novalocal run-parts[4147]: (/etc/cron.hourly) starting 0anacron
Dec 07 09:01:01 np0005549475.novalocal anacron[4155]: Anacron started on 2025-12-07
Dec 07 09:01:01 np0005549475.novalocal anacron[4155]: Will run job `cron.daily' in 15 min.
Dec 07 09:01:01 np0005549475.novalocal anacron[4155]: Will run job `cron.weekly' in 35 min.
Dec 07 09:01:01 np0005549475.novalocal anacron[4155]: Will run job `cron.monthly' in 55 min.
Dec 07 09:01:01 np0005549475.novalocal anacron[4155]: Jobs will be executed sequentially
Dec 07 09:01:01 np0005549475.novalocal run-parts[4157]: (/etc/cron.hourly) finished 0anacron
Dec 07 09:01:01 np0005549475.novalocal CROND[4143]: (root) CMDEND (run-parts /etc/cron.hourly)
Dec 07 09:01:01 np0005549475.novalocal dracut[1303]: *** Squashing the files inside the initramfs done ***
Dec 07 09:01:01 np0005549475.novalocal dracut[1303]: *** Creating image file '/boot/initramfs-5.14.0-645.el9.x86_64kdump.img' ***
Dec 07 09:01:01 np0005549475.novalocal dracut[1303]: *** Hardlinking files ***
Dec 07 09:01:01 np0005549475.novalocal dracut[1303]: Mode:           real
Dec 07 09:01:01 np0005549475.novalocal dracut[1303]: Files:          50
Dec 07 09:01:01 np0005549475.novalocal dracut[1303]: Linked:         0 files
Dec 07 09:01:01 np0005549475.novalocal dracut[1303]: Compared:       0 xattrs
Dec 07 09:01:01 np0005549475.novalocal dracut[1303]: Compared:       0 files
Dec 07 09:01:01 np0005549475.novalocal dracut[1303]: Saved:          0 B
Dec 07 09:01:01 np0005549475.novalocal dracut[1303]: Duration:       0.000836 seconds
Dec 07 09:01:01 np0005549475.novalocal dracut[1303]: *** Hardlinking files done ***
Dec 07 09:01:02 np0005549475.novalocal dracut[1303]: *** Creating initramfs image file '/boot/initramfs-5.14.0-645.el9.x86_64kdump.img' done ***
Dec 07 09:01:02 np0005549475.novalocal kdumpctl[1016]: kdump: kexec: loaded kdump kernel
Dec 07 09:01:02 np0005549475.novalocal kdumpctl[1016]: kdump: Starting kdump: [OK]
Dec 07 09:01:02 np0005549475.novalocal systemd[1]: Finished Crash recovery kernel arming.
Dec 07 09:01:02 np0005549475.novalocal systemd[1]: Startup finished in 1.572s (kernel) + 2.820s (initrd) + 19.178s (userspace) = 23.571s.
Dec 07 09:01:16 np0005549475.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 07 09:01:24 np0005549475.novalocal sshd-session[4312]: Accepted publickey for zuul from 38.102.83.114 port 54202 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Dec 07 09:01:24 np0005549475.novalocal systemd[1]: Created slice User Slice of UID 1000.
Dec 07 09:01:24 np0005549475.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Dec 07 09:01:24 np0005549475.novalocal systemd-logind[796]: New session 1 of user zuul.
Dec 07 09:01:24 np0005549475.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Dec 07 09:01:24 np0005549475.novalocal systemd[1]: Starting User Manager for UID 1000...
Dec 07 09:01:24 np0005549475.novalocal systemd[4316]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 07 09:01:24 np0005549475.novalocal systemd[4316]: Queued start job for default target Main User Target.
Dec 07 09:01:24 np0005549475.novalocal systemd[4316]: Created slice User Application Slice.
Dec 07 09:01:24 np0005549475.novalocal systemd[4316]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 07 09:01:24 np0005549475.novalocal systemd[4316]: Started Daily Cleanup of User's Temporary Directories.
Dec 07 09:01:24 np0005549475.novalocal systemd[4316]: Reached target Paths.
Dec 07 09:01:24 np0005549475.novalocal systemd[4316]: Reached target Timers.
Dec 07 09:01:24 np0005549475.novalocal systemd[4316]: Starting D-Bus User Message Bus Socket...
Dec 07 09:01:24 np0005549475.novalocal systemd[4316]: Starting Create User's Volatile Files and Directories...
Dec 07 09:01:24 np0005549475.novalocal systemd[4316]: Finished Create User's Volatile Files and Directories.
Dec 07 09:01:24 np0005549475.novalocal systemd[4316]: Listening on D-Bus User Message Bus Socket.
Dec 07 09:01:24 np0005549475.novalocal systemd[4316]: Reached target Sockets.
Dec 07 09:01:24 np0005549475.novalocal systemd[4316]: Reached target Basic System.
Dec 07 09:01:24 np0005549475.novalocal systemd[4316]: Reached target Main User Target.
Dec 07 09:01:24 np0005549475.novalocal systemd[4316]: Startup finished in 165ms.
Dec 07 09:01:24 np0005549475.novalocal systemd[1]: Started User Manager for UID 1000.
Dec 07 09:01:24 np0005549475.novalocal systemd[1]: Started Session 1 of User zuul.
Dec 07 09:01:24 np0005549475.novalocal sshd-session[4312]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 07 09:01:25 np0005549475.novalocal python3[4398]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 07 09:01:28 np0005549475.novalocal python3[4426]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 07 09:01:36 np0005549475.novalocal python3[4484]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 07 09:01:37 np0005549475.novalocal python3[4524]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Dec 07 09:01:39 np0005549475.novalocal python3[4550]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCkQcJK9AckSLB8kRJpoBvjJlvdUM1NPOv6gh0ztTck1XCwf9cQ7K6FgbgW5Zk5QtpT2Bskyk11uc8i8c2H7S/TLAvuLME63JPzSCN4U+cOYMO66ItZhTrMa8L3fJT6S2czxsCrc3UibOY/sgobMkVnTmivIl06HznGPkKZo4Vk3Pi6+wpDXgoav0MRspeRyuteMK3loUZjYiCGyQ89o0q92X6j4eA/8+lulbNsk3A+jgjjDfevRwHrl2J9/AJjxjHcK3Z2ZeCUvL89HwqGBIcuc7rrUfMRGP4ffy9GrNlMVOWz1TxigfyNSLFnmbR3B61MrGnlsygl3l+TroIGJhPvioZx2GFfCZ+oy9Loz3KObdiKDhHEVJkjFrFUeWmTpVnLursJhZOkKKQRZXtpk+klCh6rT0/LBH1X97OuWKikCL/fEXsTM3OdQ88ahIrCanC3ox9MqZCU1b3l16zHWyU8l5D42mYN79XxFZD+xD4kH1poO/KlY+66bM73wx3JNIM= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 07 09:01:39 np0005549475.novalocal python3[4574]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:01:40 np0005549475.novalocal python3[4673]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 07 09:01:40 np0005549475.novalocal python3[4744]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765098100.0129445-252-260906243681670/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=b47cd6d8bda54cadb213ff8da60cb142_id_rsa follow=False checksum=16b4efc491a0b7940e21a1d94a54c06d2c2a7618 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:01:41 np0005549475.novalocal python3[4867]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 07 09:01:41 np0005549475.novalocal python3[4938]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765098100.9971385-307-206669298366379/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=b47cd6d8bda54cadb213ff8da60cb142_id_rsa.pub follow=False checksum=09ed89e17a15aaae00313e3fe40cedf6270ab77f backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:01:43 np0005549475.novalocal python3[4986]: ansible-ping Invoked with data=pong
Dec 07 09:01:44 np0005549475.novalocal python3[5010]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 07 09:01:47 np0005549475.novalocal python3[5068]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Dec 07 09:01:48 np0005549475.novalocal python3[5100]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:01:48 np0005549475.novalocal python3[5124]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:01:49 np0005549475.novalocal python3[5148]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:01:49 np0005549475.novalocal python3[5172]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:01:49 np0005549475.novalocal python3[5196]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:01:49 np0005549475.novalocal python3[5220]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:01:51 np0005549475.novalocal sudo[5244]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzilcaugvnqybvmslxuzyrvejjaoghoe ; /usr/bin/python3'
Dec 07 09:01:51 np0005549475.novalocal sudo[5244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:01:51 np0005549475.novalocal python3[5246]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:01:51 np0005549475.novalocal sudo[5244]: pam_unix(sudo:session): session closed for user root
Dec 07 09:01:52 np0005549475.novalocal sudo[5322]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmamhisyluoleydlwigogsgwhgahjwyy ; /usr/bin/python3'
Dec 07 09:01:52 np0005549475.novalocal sudo[5322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:01:52 np0005549475.novalocal python3[5324]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 07 09:01:52 np0005549475.novalocal sudo[5322]: pam_unix(sudo:session): session closed for user root
Dec 07 09:01:52 np0005549475.novalocal sudo[5395]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbjknvocvnpkndzpgvslhuevwwnjijoz ; /usr/bin/python3'
Dec 07 09:01:52 np0005549475.novalocal sudo[5395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:01:53 np0005549475.novalocal python3[5397]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765098112.0528674-32-18969955231191/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:01:53 np0005549475.novalocal sudo[5395]: pam_unix(sudo:session): session closed for user root
Dec 07 09:01:53 np0005549475.novalocal python3[5445]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 07 09:01:54 np0005549475.novalocal python3[5469]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 07 09:01:54 np0005549475.novalocal python3[5493]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 07 09:01:54 np0005549475.novalocal python3[5517]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 07 09:01:54 np0005549475.novalocal python3[5541]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 07 09:01:55 np0005549475.novalocal python3[5565]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 07 09:01:55 np0005549475.novalocal python3[5589]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 07 09:01:55 np0005549475.novalocal python3[5613]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 07 09:01:56 np0005549475.novalocal python3[5637]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 07 09:01:56 np0005549475.novalocal python3[5661]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 07 09:01:56 np0005549475.novalocal python3[5685]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 07 09:01:57 np0005549475.novalocal python3[5709]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 07 09:01:57 np0005549475.novalocal python3[5733]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 07 09:01:57 np0005549475.novalocal python3[5757]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 07 09:01:57 np0005549475.novalocal python3[5781]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 07 09:01:58 np0005549475.novalocal python3[5805]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 07 09:01:58 np0005549475.novalocal python3[5829]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 07 09:01:58 np0005549475.novalocal python3[5853]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 07 09:01:59 np0005549475.novalocal python3[5877]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 07 09:01:59 np0005549475.novalocal python3[5901]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 07 09:01:59 np0005549475.novalocal python3[5925]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 07 09:02:00 np0005549475.novalocal python3[5949]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 07 09:02:00 np0005549475.novalocal python3[5973]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 07 09:02:00 np0005549475.novalocal python3[5997]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 07 09:02:00 np0005549475.novalocal python3[6021]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 07 09:02:01 np0005549475.novalocal python3[6045]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 07 09:02:03 np0005549475.novalocal sudo[6069]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okrirhonzulbtigbxjpgwbdyucvfzztv ; /usr/bin/python3'
Dec 07 09:02:03 np0005549475.novalocal sudo[6069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:02:03 np0005549475.novalocal python3[6071]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 07 09:02:03 np0005549475.novalocal systemd[1]: Starting Time & Date Service...
Dec 07 09:02:03 np0005549475.novalocal systemd[1]: Started Time & Date Service.
Dec 07 09:02:04 np0005549475.novalocal systemd-timedated[6073]: Changed time zone to 'UTC' (UTC).
Dec 07 09:02:04 np0005549475.novalocal sudo[6069]: pam_unix(sudo:session): session closed for user root
Dec 07 09:02:05 np0005549475.novalocal sudo[6100]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbnadhswkjwvworzqlkekqsyehisyyzn ; /usr/bin/python3'
Dec 07 09:02:05 np0005549475.novalocal sudo[6100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:02:05 np0005549475.novalocal python3[6102]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:02:05 np0005549475.novalocal sudo[6100]: pam_unix(sudo:session): session closed for user root
Dec 07 09:02:05 np0005549475.novalocal python3[6178]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 07 09:02:05 np0005549475.novalocal python3[6249]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1765098125.3841798-253-202370563454937/source _original_basename=tmpjd46tdk0 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:02:06 np0005549475.novalocal python3[6349]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 07 09:02:07 np0005549475.novalocal python3[6420]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1765098126.3693712-302-256254870777654/source _original_basename=tmp_hb94khw follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:02:07 np0005549475.novalocal sudo[6520]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-julwonlpebstqtwecynlbzusiqwgolot ; /usr/bin/python3'
Dec 07 09:02:07 np0005549475.novalocal sudo[6520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:02:08 np0005549475.novalocal python3[6522]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 07 09:02:08 np0005549475.novalocal sudo[6520]: pam_unix(sudo:session): session closed for user root
Dec 07 09:02:08 np0005549475.novalocal sudo[6593]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcjfrhoqmbxgsndtyrewygcnxxnlveqm ; /usr/bin/python3'
Dec 07 09:02:08 np0005549475.novalocal sudo[6593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:02:08 np0005549475.novalocal python3[6595]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1765098127.7742436-382-140344401303017/source _original_basename=tmplnui6jaq follow=False checksum=6c462e10cf6b935fb22f4386c31d576dcf4d4133 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:02:08 np0005549475.novalocal sudo[6593]: pam_unix(sudo:session): session closed for user root
Dec 07 09:02:09 np0005549475.novalocal python3[6643]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:02:09 np0005549475.novalocal python3[6669]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:02:09 np0005549475.novalocal sudo[6747]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdpcncmeavltdmafjirmaqrjwmbpbgst ; /usr/bin/python3'
Dec 07 09:02:09 np0005549475.novalocal sudo[6747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:02:09 np0005549475.novalocal python3[6749]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 07 09:02:09 np0005549475.novalocal sudo[6747]: pam_unix(sudo:session): session closed for user root
Dec 07 09:02:10 np0005549475.novalocal sudo[6820]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkziqffgsrjfqtyprensolfdasdzafto ; /usr/bin/python3'
Dec 07 09:02:10 np0005549475.novalocal sudo[6820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:02:10 np0005549475.novalocal python3[6822]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1765098129.5503047-453-38785797556048/source _original_basename=tmp80bwsolo follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:02:10 np0005549475.novalocal sudo[6820]: pam_unix(sudo:session): session closed for user root
Dec 07 09:02:10 np0005549475.novalocal sudo[6871]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzjnqzdrfzsmmytxamplgkemhhtyqyie ; /usr/bin/python3'
Dec 07 09:02:10 np0005549475.novalocal sudo[6871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:02:11 np0005549475.novalocal python3[6873]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163e3b-3c83-b378-1a30-00000000001f-1-compute1 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:02:11 np0005549475.novalocal sudo[6871]: pam_unix(sudo:session): session closed for user root
Dec 07 09:02:11 np0005549475.novalocal python3[6901]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env
                                                       _uses_shell=True zuul_log_id=fa163e3b-3c83-b378-1a30-000000000020-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Dec 07 09:02:13 np0005549475.novalocal python3[6929]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:02:30 np0005549475.novalocal sudo[6953]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imwehafhpibjyihwdjrxiszecvbwpily ; /usr/bin/python3'
Dec 07 09:02:30 np0005549475.novalocal sudo[6953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:02:30 np0005549475.novalocal python3[6955]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:02:30 np0005549475.novalocal sudo[6953]: pam_unix(sudo:session): session closed for user root
Dec 07 09:02:34 np0005549475.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 07 09:03:30 np0005549475.novalocal sshd-session[4325]: Received disconnect from 38.102.83.114 port 54202:11: disconnected by user
Dec 07 09:03:30 np0005549475.novalocal sshd-session[4325]: Disconnected from user zuul 38.102.83.114 port 54202
Dec 07 09:03:30 np0005549475.novalocal sshd-session[4312]: pam_unix(sshd:session): session closed for user zuul
Dec 07 09:03:30 np0005549475.novalocal systemd-logind[796]: Session 1 logged out. Waiting for processes to exit.
Dec 07 09:03:40 np0005549475.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 07 09:03:40 np0005549475.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Dec 07 09:03:40 np0005549475.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Dec 07 09:03:40 np0005549475.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Dec 07 09:03:40 np0005549475.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Dec 07 09:03:40 np0005549475.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Dec 07 09:03:40 np0005549475.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Dec 07 09:03:40 np0005549475.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Dec 07 09:03:40 np0005549475.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Dec 07 09:03:40 np0005549475.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Dec 07 09:03:40 np0005549475.novalocal NetworkManager[857]: <info>  [1765098220.2203] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 07 09:03:40 np0005549475.novalocal systemd-udevd[6959]: Network interface NamePolicy= disabled on kernel command line.
Dec 07 09:03:40 np0005549475.novalocal NetworkManager[857]: <info>  [1765098220.2448] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 07 09:03:40 np0005549475.novalocal NetworkManager[857]: <info>  [1765098220.2478] settings: (eth1): created default wired connection 'Wired connection 1'
Dec 07 09:03:40 np0005549475.novalocal NetworkManager[857]: <info>  [1765098220.2482] device (eth1): carrier: link connected
Dec 07 09:03:40 np0005549475.novalocal NetworkManager[857]: <info>  [1765098220.2484] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec 07 09:03:40 np0005549475.novalocal NetworkManager[857]: <info>  [1765098220.2490] policy: auto-activating connection 'Wired connection 1' (3cf2197f-10b3-3a0c-8bb4-f9a7144ab181)
Dec 07 09:03:40 np0005549475.novalocal NetworkManager[857]: <info>  [1765098220.2494] device (eth1): Activation: starting connection 'Wired connection 1' (3cf2197f-10b3-3a0c-8bb4-f9a7144ab181)
Dec 07 09:03:40 np0005549475.novalocal NetworkManager[857]: <info>  [1765098220.2495] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 07 09:03:40 np0005549475.novalocal NetworkManager[857]: <info>  [1765098220.2499] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 07 09:03:40 np0005549475.novalocal NetworkManager[857]: <info>  [1765098220.2503] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 07 09:03:40 np0005549475.novalocal NetworkManager[857]: <info>  [1765098220.2508] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 07 09:03:40 np0005549475.novalocal systemd[4316]: Starting Mark boot as successful...
Dec 07 09:03:40 np0005549475.novalocal systemd[4316]: Finished Mark boot as successful.
Dec 07 09:03:41 np0005549475.novalocal sshd-session[6963]: Accepted publickey for zuul from 38.102.83.114 port 55532 ssh2: RSA SHA256:hct83ililSSWAsGgD0ULsAQ0r1pHbrJ2CU75MFgoHRo
Dec 07 09:03:41 np0005549475.novalocal systemd-logind[796]: New session 3 of user zuul.
Dec 07 09:03:41 np0005549475.novalocal systemd[1]: Started Session 3 of User zuul.
Dec 07 09:03:41 np0005549475.novalocal sshd-session[6963]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 07 09:03:41 np0005549475.novalocal python3[6990]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163e3b-3c83-692e-34c8-00000000018f-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:03:51 np0005549475.novalocal sudo[7068]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlsklmfsawcgkctqiwsegjlgnjoihzxk ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 07 09:03:51 np0005549475.novalocal sudo[7068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:03:51 np0005549475.novalocal python3[7070]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 07 09:03:51 np0005549475.novalocal sudo[7068]: pam_unix(sudo:session): session closed for user root
Dec 07 09:03:51 np0005549475.novalocal sudo[7141]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eeowlkmsnwtawfcrppznlekfcppwdwvh ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 07 09:03:51 np0005549475.novalocal sudo[7141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:03:52 np0005549475.novalocal python3[7143]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765098231.2525275-155-64259071660889/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=473a0a55f6604beb6002a0bba602d70342b40af6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:03:52 np0005549475.novalocal sudo[7141]: pam_unix(sudo:session): session closed for user root
Dec 07 09:03:52 np0005549475.novalocal sudo[7191]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnsynkzwgvwbctfmbuhqtfcnoxkqhsmc ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 07 09:03:52 np0005549475.novalocal sudo[7191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:03:52 np0005549475.novalocal python3[7193]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 07 09:03:52 np0005549475.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec 07 09:03:52 np0005549475.novalocal systemd[1]: Stopped Network Manager Wait Online.
Dec 07 09:03:52 np0005549475.novalocal systemd[1]: Stopping Network Manager Wait Online...
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[857]: <info>  [1765098232.6813] caught SIGTERM, shutting down normally.
Dec 07 09:03:52 np0005549475.novalocal systemd[1]: Stopping Network Manager...
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[857]: <info>  [1765098232.6830] dhcp4 (eth0): canceled DHCP transaction
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[857]: <info>  [1765098232.6830] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[857]: <info>  [1765098232.6830] dhcp4 (eth0): state changed no lease
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[857]: <info>  [1765098232.6833] manager: NetworkManager state is now CONNECTING
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[857]: <info>  [1765098232.6946] dhcp4 (eth1): canceled DHCP transaction
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[857]: <info>  [1765098232.6947] dhcp4 (eth1): state changed no lease
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[857]: <info>  [1765098232.7033] exiting (success)
Dec 07 09:03:52 np0005549475.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 07 09:03:52 np0005549475.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 07 09:03:52 np0005549475.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Dec 07 09:03:52 np0005549475.novalocal systemd[1]: Stopped Network Manager.
Dec 07 09:03:52 np0005549475.novalocal systemd[1]: NetworkManager.service: Consumed 1.286s CPU time, 10.0M memory peak.
Dec 07 09:03:52 np0005549475.novalocal systemd[1]: Starting Network Manager...
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098232.7811] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:7c26b365-f356-47da-bd1a-0ae584570406)
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098232.7812] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098232.7873] manager[0x565262228070]: monitoring kernel firmware directory '/lib/firmware'.
Dec 07 09:03:52 np0005549475.novalocal systemd[1]: Starting Hostname Service...
Dec 07 09:03:52 np0005549475.novalocal systemd[1]: Started Hostname Service.
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098232.9053] hostname: hostname: using hostnamed
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098232.9054] hostname: static hostname changed from (none) to "np0005549475.novalocal"
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098232.9059] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098232.9063] manager[0x565262228070]: rfkill: Wi-Fi hardware radio set enabled
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098232.9064] manager[0x565262228070]: rfkill: WWAN hardware radio set enabled
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098232.9088] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098232.9088] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098232.9088] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098232.9089] manager: Networking is enabled by state file
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098232.9091] settings: Loaded settings plugin: keyfile (internal)
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098232.9094] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098232.9117] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098232.9125] dhcp: init: Using DHCP client 'internal'
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098232.9128] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098232.9132] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098232.9137] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098232.9145] device (lo): Activation: starting connection 'lo' (9cb2e8de-c1b3-45af-836f-38efb3fd24ca)
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098232.9151] device (eth0): carrier: link connected
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098232.9154] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098232.9158] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098232.9159] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098232.9164] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098232.9171] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098232.9177] device (eth1): carrier: link connected
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098232.9180] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098232.9185] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (3cf2197f-10b3-3a0c-8bb4-f9a7144ab181) (indicated)
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098232.9185] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098232.9190] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098232.9196] device (eth1): Activation: starting connection 'Wired connection 1' (3cf2197f-10b3-3a0c-8bb4-f9a7144ab181)
Dec 07 09:03:52 np0005549475.novalocal systemd[1]: Started Network Manager.
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098232.9202] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098232.9206] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098232.9208] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098232.9209] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098232.9212] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098232.9215] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098232.9217] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098232.9231] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098232.9242] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098232.9257] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098232.9263] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098232.9279] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098232.9283] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 07 09:03:52 np0005549475.novalocal systemd[1]: Starting Network Manager Wait Online...
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098232.9304] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098232.9310] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098232.9318] device (lo): Activation: successful, device activated.
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098232.9331] dhcp4 (eth0): state changed new lease, address=38.102.83.74
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098232.9341] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098232.9437] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098232.9486] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098232.9489] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098232.9498] manager: NetworkManager state is now CONNECTED_SITE
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098232.9507] device (eth0): Activation: successful, device activated.
Dec 07 09:03:52 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098232.9515] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 07 09:03:52 np0005549475.novalocal sudo[7191]: pam_unix(sudo:session): session closed for user root
Dec 07 09:03:53 np0005549475.novalocal python3[7277]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163e3b-3c83-692e-34c8-0000000000c8-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:04:03 np0005549475.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 07 09:04:22 np0005549475.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 07 09:04:38 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098278.3515] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 07 09:04:38 np0005549475.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 07 09:04:38 np0005549475.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 07 09:04:38 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098278.3756] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 07 09:04:38 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098278.3758] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 07 09:04:38 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098278.3766] device (eth1): Activation: successful, device activated.
Dec 07 09:04:38 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098278.3781] manager: startup complete
Dec 07 09:04:38 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098278.3785] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Dec 07 09:04:38 np0005549475.novalocal NetworkManager[7202]: <warn>  [1765098278.3797] device (eth1): Activation: failed for connection 'Wired connection 1'
Dec 07 09:04:38 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098278.3808] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Dec 07 09:04:38 np0005549475.novalocal systemd[1]: Finished Network Manager Wait Online.
Dec 07 09:04:38 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098278.3986] dhcp4 (eth1): canceled DHCP transaction
Dec 07 09:04:38 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098278.3987] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 07 09:04:38 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098278.3988] dhcp4 (eth1): state changed no lease
Dec 07 09:04:38 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098278.4008] policy: auto-activating connection 'ci-private-network' (9290e757-2102-5044-b397-b83b445ce6e1)
Dec 07 09:04:38 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098278.4017] device (eth1): Activation: starting connection 'ci-private-network' (9290e757-2102-5044-b397-b83b445ce6e1)
Dec 07 09:04:38 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098278.4019] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 07 09:04:38 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098278.4023] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 07 09:04:38 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098278.4033] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 07 09:04:38 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098278.4046] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 07 09:04:38 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098278.4098] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 07 09:04:38 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098278.4100] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 07 09:04:38 np0005549475.novalocal NetworkManager[7202]: <info>  [1765098278.4108] device (eth1): Activation: successful, device activated.
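Note: the sequence above shows eth1's assumed profile 'Wired connection 1' failing with reason 'ip-config-unavailable' (no DHCP lease) and NetworkManager falling back to auto-activating the 'ci-private-network' profile, which completes immediately. The log does not record how that profile is defined; as a rough sketch only, a statically addressed fallback like it could be created with nmcli along these lines (the address and prefix are placeholders, not taken from the log):

    # Hedged sketch: define a static fallback profile for eth1; the IPv4 address
    # below is a placeholder, the real CI subnet is not visible in this log.
    nmcli connection add type ethernet ifname eth1 con-name ci-private-network \
        ipv4.method manual ipv4.addresses 192.0.2.10/24 autoconnect yes
    nmcli connection up ci-private-network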
Dec 07 09:04:48 np0005549475.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 07 09:04:53 np0005549475.novalocal sshd-session[6966]: Received disconnect from 38.102.83.114 port 55532:11: disconnected by user
Dec 07 09:04:53 np0005549475.novalocal sshd-session[6966]: Disconnected from user zuul 38.102.83.114 port 55532
Dec 07 09:04:53 np0005549475.novalocal sshd-session[6963]: pam_unix(sshd:session): session closed for user zuul
Dec 07 09:04:53 np0005549475.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Dec 07 09:04:53 np0005549475.novalocal systemd[1]: session-3.scope: Consumed 1.816s CPU time.
Dec 07 09:04:53 np0005549475.novalocal systemd-logind[796]: Session 3 logged out. Waiting for processes to exit.
Dec 07 09:04:53 np0005549475.novalocal systemd-logind[796]: Removed session 3.
Dec 07 09:05:28 np0005549475.novalocal sshd-session[7305]: Accepted publickey for zuul from 38.102.83.114 port 50674 ssh2: RSA SHA256:hct83ililSSWAsGgD0ULsAQ0r1pHbrJ2CU75MFgoHRo
Dec 07 09:05:28 np0005549475.novalocal systemd-logind[796]: New session 4 of user zuul.
Dec 07 09:05:28 np0005549475.novalocal systemd[1]: Started Session 4 of User zuul.
Dec 07 09:05:28 np0005549475.novalocal sshd-session[7305]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 07 09:05:28 np0005549475.novalocal sudo[7384]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfcvdodhpxuwcsmjjbmqgvjdoazfhlww ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 07 09:05:28 np0005549475.novalocal sudo[7384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:05:28 np0005549475.novalocal python3[7386]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 07 09:05:28 np0005549475.novalocal sudo[7384]: pam_unix(sudo:session): session closed for user root
Dec 07 09:05:29 np0005549475.novalocal sudo[7457]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbfcgpcapvxgzgflcwcyntjtqvqxfdsr ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 07 09:05:29 np0005549475.novalocal sudo[7457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:05:29 np0005549475.novalocal python3[7459]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765098328.6026716-373-253333197824493/source _original_basename=tmpmqgf8zaa follow=False checksum=57bca5a761f595fa34860f9325990c87e5f7eb2b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:05:29 np0005549475.novalocal sudo[7457]: pam_unix(sudo:session): session closed for user root
Dec 07 09:05:32 np0005549475.novalocal sshd-session[7308]: Connection closed by 38.102.83.114 port 50674
Dec 07 09:05:32 np0005549475.novalocal sshd-session[7305]: pam_unix(sshd:session): session closed for user zuul
Dec 07 09:05:32 np0005549475.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Dec 07 09:05:32 np0005549475.novalocal systemd-logind[796]: Session 4 logged out. Waiting for processes to exit.
Dec 07 09:05:32 np0005549475.novalocal systemd-logind[796]: Removed session 4.
Dec 07 09:07:15 np0005549475.novalocal systemd[4316]: Created slice User Background Tasks Slice.
Dec 07 09:07:15 np0005549475.novalocal systemd[4316]: Starting Cleanup of User's Temporary Files and Directories...
Dec 07 09:07:15 np0005549475.novalocal systemd[4316]: Finished Cleanup of User's Temporary Files and Directories.
Dec 07 09:10:24 np0005549475.novalocal sshd-session[7489]: Connection closed by 66.240.236.116 port 43162 [preauth]
Dec 07 09:12:49 np0005549475.novalocal sshd-session[7493]: Accepted publickey for zuul from 38.102.83.114 port 55796 ssh2: RSA SHA256:hct83ililSSWAsGgD0ULsAQ0r1pHbrJ2CU75MFgoHRo
Dec 07 09:12:49 np0005549475.novalocal systemd-logind[796]: New session 5 of user zuul.
Dec 07 09:12:49 np0005549475.novalocal systemd[1]: Started Session 5 of User zuul.
Dec 07 09:12:49 np0005549475.novalocal sshd-session[7493]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 07 09:12:49 np0005549475.novalocal sudo[7520]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcfnihngzneahsuynekaflfruxtjfchq ; /usr/bin/python3'
Dec 07 09:12:49 np0005549475.novalocal sudo[7520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:12:50 np0005549475.novalocal python3[7522]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163e3b-3c83-3a55-99f4-000000001cea-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:12:50 np0005549475.novalocal sudo[7520]: pam_unix(sudo:session): session closed for user root
Dec 07 09:12:50 np0005549475.novalocal sudo[7548]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uogbmdlpeimaduafpsrglufeisgikgqs ; /usr/bin/python3'
Dec 07 09:12:50 np0005549475.novalocal sudo[7548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:12:50 np0005549475.novalocal python3[7550]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:12:50 np0005549475.novalocal sudo[7548]: pam_unix(sudo:session): session closed for user root
Dec 07 09:12:50 np0005549475.novalocal sudo[7575]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgcbuidxcomdseludqacwcojjgntofvt ; /usr/bin/python3'
Dec 07 09:12:50 np0005549475.novalocal sudo[7575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:12:50 np0005549475.novalocal python3[7577]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:12:50 np0005549475.novalocal sudo[7575]: pam_unix(sudo:session): session closed for user root
Dec 07 09:12:50 np0005549475.novalocal sudo[7601]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhlbrppvgnmmdvgayzlllbsfjmvtpzln ; /usr/bin/python3'
Dec 07 09:12:50 np0005549475.novalocal sudo[7601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:12:51 np0005549475.novalocal python3[7603]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:12:51 np0005549475.novalocal sudo[7601]: pam_unix(sudo:session): session closed for user root
Dec 07 09:12:51 np0005549475.novalocal sudo[7627]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grusgqapbgakkpshoibkbyfqinxfmdyz ; /usr/bin/python3'
Dec 07 09:12:51 np0005549475.novalocal sudo[7627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:12:51 np0005549475.novalocal python3[7629]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:12:51 np0005549475.novalocal sudo[7627]: pam_unix(sudo:session): session closed for user root
Dec 07 09:12:51 np0005549475.novalocal sudo[7653]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scctplwswvlggekaowrrvitkksbbxgdo ; /usr/bin/python3'
Dec 07 09:12:51 np0005549475.novalocal sudo[7653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:12:51 np0005549475.novalocal python3[7655]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:12:51 np0005549475.novalocal sudo[7653]: pam_unix(sudo:session): session closed for user root
Dec 07 09:12:52 np0005549475.novalocal sudo[7731]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hknrjepwejrbrpfdmmbdxnaeizyzhbko ; /usr/bin/python3'
Dec 07 09:12:52 np0005549475.novalocal sudo[7731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:12:52 np0005549475.novalocal python3[7733]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 07 09:12:52 np0005549475.novalocal sudo[7731]: pam_unix(sudo:session): session closed for user root
Dec 07 09:12:52 np0005549475.novalocal sudo[7804]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-weacptswmkqxnkirrffahnxhcfvstnvt ; /usr/bin/python3'
Dec 07 09:12:52 np0005549475.novalocal sudo[7804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:12:52 np0005549475.novalocal python3[7806]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765098772.0880635-519-11423962552536/source _original_basename=tmpb9g6bjif follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:12:52 np0005549475.novalocal sudo[7804]: pam_unix(sudo:session): session closed for user root
Dec 07 09:12:53 np0005549475.novalocal sudo[7854]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuotiotjpnccvgjfgaelwevppmraents ; /usr/bin/python3'
Dec 07 09:12:53 np0005549475.novalocal sudo[7854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:12:53 np0005549475.novalocal python3[7856]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 07 09:12:53 np0005549475.novalocal systemd[1]: Reloading.
Dec 07 09:12:53 np0005549475.novalocal systemd-rc-local-generator[7877]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:12:53 np0005549475.novalocal sudo[7854]: pam_unix(sudo:session): session closed for user root
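Note: the contents of /etc/systemd/system.conf.d/override.conf are not logged (content=NOT_LOGGING_PARAMETER), but the later wait for /sys/fs/cgroup/system.slice/io.max suggests the drop-in turns on IO accounting so the io controller (and its io.max file) is enabled on the top-level slices. A plausible, unverified reconstruction of that step:

    # Hypothetical drop-in; the real override.conf content is not in the log.
    mkdir -p /etc/systemd/system.conf.d
    cat > /etc/systemd/system.conf.d/override.conf <<'EOF'
    [Manager]
    DefaultIOAccounting=yes
    EOF
    systemctl daemon-reload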
Dec 07 09:12:55 np0005549475.novalocal sudo[7910]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbznobthuqpqcznjgtbapkggcgoslnru ; /usr/bin/python3'
Dec 07 09:12:55 np0005549475.novalocal sudo[7910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:12:55 np0005549475.novalocal python3[7912]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Dec 07 09:12:55 np0005549475.novalocal sudo[7910]: pam_unix(sudo:session): session closed for user root
Dec 07 09:12:55 np0005549475.novalocal sudo[7936]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvztcnkqnkomjjfplwkdzsichuzzfxku ; /usr/bin/python3'
Dec 07 09:12:55 np0005549475.novalocal sudo[7936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:12:55 np0005549475.novalocal python3[7938]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:12:55 np0005549475.novalocal sudo[7936]: pam_unix(sudo:session): session closed for user root
Dec 07 09:12:56 np0005549475.novalocal sudo[7964]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhagffgildyezstfccntlkdevlpmkwjl ; /usr/bin/python3'
Dec 07 09:12:56 np0005549475.novalocal sudo[7964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:12:56 np0005549475.novalocal python3[7966]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:12:56 np0005549475.novalocal sudo[7964]: pam_unix(sudo:session): session closed for user root
Dec 07 09:12:56 np0005549475.novalocal sudo[7992]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imrnpeazzurclfucxfpzykhncfsusngo ; /usr/bin/python3'
Dec 07 09:12:56 np0005549475.novalocal sudo[7992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:12:56 np0005549475.novalocal python3[7994]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:12:56 np0005549475.novalocal sudo[7992]: pam_unix(sudo:session): session closed for user root
Dec 07 09:12:56 np0005549475.novalocal sudo[8020]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfmuxppthavwcxlrujkgjpaiokpxzzyt ; /usr/bin/python3'
Dec 07 09:12:56 np0005549475.novalocal sudo[8020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:12:56 np0005549475.novalocal python3[8022]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:12:56 np0005549475.novalocal sudo[8020]: pam_unix(sudo:session): session closed for user root
Dec 07 09:12:57 np0005549475.novalocal python3[8049]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163e3b-3c83-3a55-99f4-000000001cf1-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:12:57 np0005549475.novalocal python3[8079]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
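Note: the echo commands above use the cgroup v2 io.max interface. "252:0" is the MAJ:MIN pair of /dev/vda returned by the earlier "lsblk -nd -o MAJ:MIN /dev/vda" call, and the riops/wiops/rbps/wbps keys cap read/write IOPS and bytes per second (262144000 B/s = 250 MiB/s). A compact shell equivalent of the per-slice writes, assuming the same device and limits:

    # cgroup v2 IO throttling, equivalent to the per-slice echo commands above.
    dev=$(lsblk -nd -o MAJ:MIN /dev/vda)
    for slice in init.scope machine.slice system.slice user.slice; do
        echo "$dev riops=18000 wiops=18000 rbps=262144000 wbps=262144000" \
            > "/sys/fs/cgroup/$slice/io.max"
    done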
Dec 07 09:13:00 np0005549475.novalocal sshd-session[7496]: Connection closed by 38.102.83.114 port 55796
Dec 07 09:13:00 np0005549475.novalocal sshd-session[7493]: pam_unix(sshd:session): session closed for user zuul
Dec 07 09:13:00 np0005549475.novalocal systemd-logind[796]: Session 5 logged out. Waiting for processes to exit.
Dec 07 09:13:00 np0005549475.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Dec 07 09:13:00 np0005549475.novalocal systemd[1]: session-5.scope: Consumed 4.680s CPU time.
Dec 07 09:13:00 np0005549475.novalocal systemd-logind[796]: Removed session 5.
Dec 07 09:13:02 np0005549475.novalocal sshd-session[8084]: Accepted publickey for zuul from 38.102.83.114 port 56514 ssh2: RSA SHA256:hct83ililSSWAsGgD0ULsAQ0r1pHbrJ2CU75MFgoHRo
Dec 07 09:13:02 np0005549475.novalocal systemd-logind[796]: New session 6 of user zuul.
Dec 07 09:13:02 np0005549475.novalocal systemd[1]: Started Session 6 of User zuul.
Dec 07 09:13:02 np0005549475.novalocal sshd-session[8084]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 07 09:13:02 np0005549475.novalocal sudo[8111]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvzkezizhxusttqjcgxqyoqrgkqduxwh ; /usr/bin/python3'
Dec 07 09:13:02 np0005549475.novalocal sudo[8111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:13:02 np0005549475.novalocal python3[8113]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
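Note: the ansible.legacy.dnf task above installs the podman and buildah packages with default options; the shell equivalent would simply be:

    # CLI equivalent of the dnf task logged above.
    dnf -y install podman buildah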
Dec 07 09:13:05 np0005549475.novalocal irqbalance[786]: Cannot change IRQ 27 affinity: Operation not permitted
Dec 07 09:13:05 np0005549475.novalocal irqbalance[786]: IRQ 27 affinity is now unmanaged
Dec 07 09:13:17 np0005549475.novalocal kernel: SELinux:  Converting 386 SID table entries...
Dec 07 09:13:17 np0005549475.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 07 09:13:17 np0005549475.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 07 09:13:17 np0005549475.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 07 09:13:17 np0005549475.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 07 09:13:17 np0005549475.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 07 09:13:17 np0005549475.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 07 09:13:17 np0005549475.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 07 09:13:26 np0005549475.novalocal kernel: SELinux:  Converting 386 SID table entries...
Dec 07 09:13:26 np0005549475.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 07 09:13:26 np0005549475.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 07 09:13:26 np0005549475.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 07 09:13:26 np0005549475.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 07 09:13:26 np0005549475.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 07 09:13:26 np0005549475.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 07 09:13:26 np0005549475.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 07 09:13:34 np0005549475.novalocal kernel: SELinux:  Converting 386 SID table entries...
Dec 07 09:13:34 np0005549475.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 07 09:13:34 np0005549475.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 07 09:13:34 np0005549475.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 07 09:13:34 np0005549475.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 07 09:13:34 np0005549475.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 07 09:13:34 np0005549475.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 07 09:13:34 np0005549475.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 07 09:13:36 np0005549475.novalocal setsebool[8181]: The virt_use_nfs policy boolean was changed to 1 by root
Dec 07 09:13:36 np0005549475.novalocal setsebool[8181]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
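Note: the two SELinux boolean changes above are the ones commonly flipped for containerized and virtualized workloads (NFS-backed storage and extra capabilities in sandboxes). The log does not show whether the change was made persistent; the one-shot equivalent would be (add -P to persist across reboots):

    # Equivalent manual commands for the boolean changes recorded above.
    setsebool virt_use_nfs 1
    setsebool virt_sandbox_use_all_caps 1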
Dec 07 09:13:46 np0005549475.novalocal kernel: SELinux:  Converting 389 SID table entries...
Dec 07 09:13:46 np0005549475.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 07 09:13:46 np0005549475.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 07 09:13:46 np0005549475.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 07 09:13:46 np0005549475.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 07 09:13:46 np0005549475.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 07 09:13:46 np0005549475.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 07 09:13:46 np0005549475.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 07 09:14:04 np0005549475.novalocal dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec 07 09:14:04 np0005549475.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 07 09:14:04 np0005549475.novalocal systemd[1]: Starting man-db-cache-update.service...
Dec 07 09:14:04 np0005549475.novalocal systemd[1]: Reloading.
Dec 07 09:14:04 np0005549475.novalocal systemd-rc-local-generator[8938]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:14:04 np0005549475.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Dec 07 09:14:05 np0005549475.novalocal sudo[8111]: pam_unix(sudo:session): session closed for user root
Dec 07 09:14:10 np0005549475.novalocal python3[12837]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                                        _uses_shell=True zuul_log_id=fa163e3b-3c83-6949-57be-00000000000c-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:14:10 np0005549475.novalocal kernel: evm: overlay not supported
Dec 07 09:14:10 np0005549475.novalocal systemd[4316]: Starting D-Bus User Message Bus...
Dec 07 09:14:10 np0005549475.novalocal dbus-broker-launch[13610]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Dec 07 09:14:10 np0005549475.novalocal dbus-broker-launch[13610]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Dec 07 09:14:10 np0005549475.novalocal systemd[4316]: Started D-Bus User Message Bus.
Dec 07 09:14:10 np0005549475.novalocal dbus-broker-lau[13610]: Ready
Dec 07 09:14:10 np0005549475.novalocal systemd[4316]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec 07 09:14:10 np0005549475.novalocal systemd[4316]: Created slice Slice /user.
Dec 07 09:14:10 np0005549475.novalocal systemd[4316]: podman-13498.scope: unit configures an IP firewall, but not running as root.
Dec 07 09:14:10 np0005549475.novalocal systemd[4316]: (This warning is only shown for the first unit using IP firewalling.)
Dec 07 09:14:10 np0005549475.novalocal systemd[4316]: Started podman-13498.scope.
Dec 07 09:14:11 np0005549475.novalocal systemd[4316]: Started podman-pause-46c5f85e.scope.
Dec 07 09:14:11 np0005549475.novalocal sshd-session[8087]: Connection closed by 38.102.83.114 port 56514
Dec 07 09:14:11 np0005549475.novalocal sshd-session[8084]: pam_unix(sshd:session): session closed for user zuul
Dec 07 09:14:11 np0005549475.novalocal systemd[1]: session-6.scope: Deactivated successfully.
Dec 07 09:14:11 np0005549475.novalocal systemd[1]: session-6.scope: Consumed 58.922s CPU time.
Dec 07 09:14:11 np0005549475.novalocal systemd-logind[796]: Session 6 logged out. Waiting for processes to exit.
Dec 07 09:14:11 np0005549475.novalocal systemd-logind[796]: Removed session 6.
Dec 07 09:14:25 np0005549475.novalocal sshd-session[19061]: Connection closed by 38.102.83.80 port 33108 [preauth]
Dec 07 09:14:25 np0005549475.novalocal sshd-session[19062]: Connection closed by 38.102.83.80 port 33120 [preauth]
Dec 07 09:14:25 np0005549475.novalocal sshd-session[19065]: Unable to negotiate with 38.102.83.80 port 33128: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Dec 07 09:14:25 np0005549475.novalocal sshd-session[19068]: Unable to negotiate with 38.102.83.80 port 33136: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Dec 07 09:14:25 np0005549475.novalocal sshd-session[19064]: Unable to negotiate with 38.102.83.80 port 33144: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Dec 07 09:14:30 np0005549475.novalocal sshd-session[20767]: Accepted publickey for zuul from 38.102.83.114 port 46838 ssh2: RSA SHA256:hct83ililSSWAsGgD0ULsAQ0r1pHbrJ2CU75MFgoHRo
Dec 07 09:14:30 np0005549475.novalocal systemd-logind[796]: New session 7 of user zuul.
Dec 07 09:14:30 np0005549475.novalocal systemd[1]: Started Session 7 of User zuul.
Dec 07 09:14:30 np0005549475.novalocal sshd-session[20767]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 07 09:14:31 np0005549475.novalocal python3[20893]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPEBTBhBZP90LzstcRNaMEaJYA9StP5JdyPfNDHacfdtvJAhV3TPbWHNVN0Z+oo6KXJ9tO3/Fc2SBfhpFcx8Lls= zuul@np0005549473.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 07 09:14:31 np0005549475.novalocal sudo[21051]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpmchjxkiqammizouchimgzyovzuluqh ; /usr/bin/python3'
Dec 07 09:14:31 np0005549475.novalocal sudo[21051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:14:31 np0005549475.novalocal python3[21061]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPEBTBhBZP90LzstcRNaMEaJYA9StP5JdyPfNDHacfdtvJAhV3TPbWHNVN0Z+oo6KXJ9tO3/Fc2SBfhpFcx8Lls= zuul@np0005549473.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 07 09:14:31 np0005549475.novalocal sudo[21051]: pam_unix(sudo:session): session closed for user root
Dec 07 09:14:32 np0005549475.novalocal sudo[21439]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yevpuooooolgeetvihwleyktokermbkr ; /usr/bin/python3'
Dec 07 09:14:32 np0005549475.novalocal sudo[21439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:14:32 np0005549475.novalocal python3[21448]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005549475.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Dec 07 09:14:32 np0005549475.novalocal useradd[21532]: new group: name=cloud-admin, GID=1002
Dec 07 09:14:32 np0005549475.novalocal useradd[21532]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Dec 07 09:14:32 np0005549475.novalocal sudo[21439]: pam_unix(sudo:session): session closed for user root
Dec 07 09:14:32 np0005549475.novalocal sudo[21718]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvijmocusqwucvpbhjiydiitaemetbku ; /usr/bin/python3'
Dec 07 09:14:32 np0005549475.novalocal sudo[21718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:14:32 np0005549475.novalocal python3[21727]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPEBTBhBZP90LzstcRNaMEaJYA9StP5JdyPfNDHacfdtvJAhV3TPbWHNVN0Z+oo6KXJ9tO3/Fc2SBfhpFcx8Lls= zuul@np0005549473.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 07 09:14:32 np0005549475.novalocal sudo[21718]: pam_unix(sudo:session): session closed for user root
Dec 07 09:14:33 np0005549475.novalocal sudo[21991]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rducqjujittzpkywblnfhaolsfttbyjw ; /usr/bin/python3'
Dec 07 09:14:33 np0005549475.novalocal sudo[21991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:14:33 np0005549475.novalocal python3[22001]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 07 09:14:33 np0005549475.novalocal sudo[21991]: pam_unix(sudo:session): session closed for user root
Dec 07 09:14:33 np0005549475.novalocal sudo[22277]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrpxxvkmrrlrenlddmgjfylxwiovtctd ; /usr/bin/python3'
Dec 07 09:14:33 np0005549475.novalocal sudo[22277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:14:33 np0005549475.novalocal python3[22288]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1765098873.063579-151-120845066181852/source _original_basename=tmp_8kjdu22 follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:14:33 np0005549475.novalocal sudo[22277]: pam_unix(sudo:session): session closed for user root
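Note: the block above creates a cloud-admin user, installs the zuul ECDSA public key for it, and drops a sudoers fragment at /etc/sudoers.d/cloud-admin with mode 0640. The sudoers content itself is not logged; a passwordless rule is only the conventional guess for CI admin users. A hedged reconstruction of the same steps by hand:

    # Hedged reconstruction; the real sudoers content is hidden
    # (content=NOT_LOGGING_PARAMETER), so NOPASSWD:ALL is an assumption.
    useradd -m -s /bin/bash cloud-admin
    install -d -m 0700 -o cloud-admin -g cloud-admin /home/cloud-admin/.ssh
    echo "ecdsa-sha2-nistp256 AAAA... zuul@np0005549473.novalocal" \
        >> /home/cloud-admin/.ssh/authorized_keys
    chown cloud-admin:cloud-admin /home/cloud-admin/.ssh/authorized_keys
    echo "cloud-admin ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/cloud-admin
    chmod 0640 /etc/sudoers.d/cloud-admin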
Dec 07 09:14:34 np0005549475.novalocal sudo[22597]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aloamyeparqslseqlysdewmjgesueixl ; /usr/bin/python3'
Dec 07 09:14:34 np0005549475.novalocal sudo[22597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:14:34 np0005549475.novalocal python3[22609]: ansible-ansible.builtin.hostname Invoked with name=compute-1 use=systemd
Dec 07 09:14:34 np0005549475.novalocal systemd[1]: Starting Hostname Service...
Dec 07 09:14:34 np0005549475.novalocal systemd[1]: Started Hostname Service.
Dec 07 09:14:34 np0005549475.novalocal systemd-hostnamed[22693]: Changed pretty hostname to 'compute-1'
Dec 07 09:14:34 compute-1 systemd-hostnamed[22693]: Hostname set to <compute-1> (static)
Dec 07 09:14:34 compute-1 NetworkManager[7202]: <info>  [1765098874.8796] hostname: static hostname changed from "np0005549475.novalocal" to "compute-1"
Dec 07 09:14:34 compute-1 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 07 09:14:34 compute-1 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 07 09:14:34 compute-1 sudo[22597]: pam_unix(sudo:session): session closed for user root
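Note: ansible.builtin.hostname with use=systemd goes through systemd-hostnamed, which is why both the pretty and static hostnames switch to 'compute-1' above and NetworkManager immediately picks up the change. The shell equivalent would be:

    # Equivalent manual command for the hostname change recorded above.
    hostnamectl set-hostname compute-1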
Dec 07 09:14:35 compute-1 sshd-session[20834]: Connection closed by 38.102.83.114 port 46838
Dec 07 09:14:35 compute-1 sshd-session[20767]: pam_unix(sshd:session): session closed for user zuul
Dec 07 09:14:35 compute-1 systemd[1]: session-7.scope: Deactivated successfully.
Dec 07 09:14:35 compute-1 systemd[1]: session-7.scope: Consumed 2.302s CPU time.
Dec 07 09:14:35 compute-1 systemd-logind[796]: Session 7 logged out. Waiting for processes to exit.
Dec 07 09:14:35 compute-1 systemd-logind[796]: Removed session 7.
Dec 07 09:14:44 compute-1 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 07 09:14:57 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 07 09:14:57 compute-1 systemd[1]: Finished man-db-cache-update.service.
Dec 07 09:14:57 compute-1 systemd[1]: man-db-cache-update.service: Consumed 1min 4.513s CPU time.
Dec 07 09:14:57 compute-1 systemd[1]: run-rb1a9606dda144be99f068d3b181c11ab.service: Deactivated successfully.
Dec 07 09:15:02 compute-1 sshd-session[29965]: Connection closed by authenticating user root 87.120.191.21 port 28392 [preauth]
Dec 07 09:15:04 compute-1 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 07 09:16:01 compute-1 anacron[4155]: Job `cron.daily' started
Dec 07 09:16:01 compute-1 anacron[4155]: Job `cron.daily' terminated
Dec 07 09:16:01 compute-1 systemd[1]: Starting Cleanup of Temporary Directories...
Dec 07 09:16:01 compute-1 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Dec 07 09:16:01 compute-1 systemd[1]: Finished Cleanup of Temporary Directories.
Dec 07 09:16:01 compute-1 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Dec 07 09:18:10 compute-1 systemd[1]: Starting dnf makecache...
Dec 07 09:18:10 compute-1 sshd-session[29977]: Accepted publickey for zuul from 38.102.83.80 port 53380 ssh2: RSA SHA256:hct83ililSSWAsGgD0ULsAQ0r1pHbrJ2CU75MFgoHRo
Dec 07 09:18:10 compute-1 systemd-logind[796]: New session 8 of user zuul.
Dec 07 09:18:10 compute-1 systemd[1]: Started Session 8 of User zuul.
Dec 07 09:18:10 compute-1 sshd-session[29977]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 07 09:18:10 compute-1 dnf[29979]: Failed determining last makecache time.
Dec 07 09:18:10 compute-1 dnf[29979]: CentOS Stream 9 - BaseOS                         43 kB/s | 7.3 kB     00:00
Dec 07 09:18:10 compute-1 dnf[29979]: CentOS Stream 9 - AppStream                      75 kB/s | 7.4 kB     00:00
Dec 07 09:18:10 compute-1 python3[30055]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 07 09:18:11 compute-1 dnf[29979]: CentOS Stream 9 - CRB                            86 kB/s | 7.2 kB     00:00
Dec 07 09:18:11 compute-1 dnf[29979]: CentOS Stream 9 - Extras packages                79 kB/s | 8.3 kB     00:00
Dec 07 09:18:11 compute-1 dnf[29979]: Metadata cache created.
Dec 07 09:18:11 compute-1 systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec 07 09:18:11 compute-1 systemd[1]: Finished dnf makecache.
Dec 07 09:18:12 compute-1 sudo[30172]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdfaxceuohfalefpflyxghfunzviywgl ; /usr/bin/python3'
Dec 07 09:18:12 compute-1 sudo[30172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:18:12 compute-1 python3[30174]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 07 09:18:12 compute-1 sudo[30172]: pam_unix(sudo:session): session closed for user root
Dec 07 09:18:13 compute-1 sudo[30245]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmdyexehmoxbdmilcsunhcejogicttuj ; /usr/bin/python3'
Dec 07 09:18:13 compute-1 sudo[30245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:18:13 compute-1 python3[30247]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765099092.4998693-33941-45750858921809/source mode=0755 _original_basename=delorean.repo follow=False checksum=39c885eb875fd03e010d1b0454241c26b121dfb2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:18:13 compute-1 sudo[30245]: pam_unix(sudo:session): session closed for user root
Dec 07 09:18:13 compute-1 sudo[30271]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzzboxidmsaewataqmxycubzibljbyzs ; /usr/bin/python3'
Dec 07 09:18:13 compute-1 sudo[30271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:18:13 compute-1 python3[30273]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 07 09:18:13 compute-1 sudo[30271]: pam_unix(sudo:session): session closed for user root
Dec 07 09:18:13 compute-1 sudo[30344]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kikrjpsuhxfyvfhqfoxuewfboqreuerd ; /usr/bin/python3'
Dec 07 09:18:13 compute-1 sudo[30344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:18:13 compute-1 python3[30346]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765099092.4998693-33941-45750858921809/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:18:14 compute-1 sudo[30344]: pam_unix(sudo:session): session closed for user root
Dec 07 09:18:14 compute-1 sudo[30370]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsbnlzncxewyewrtpztffzxssmbqfecy ; /usr/bin/python3'
Dec 07 09:18:14 compute-1 sudo[30370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:18:14 compute-1 python3[30372]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 07 09:18:14 compute-1 sudo[30370]: pam_unix(sudo:session): session closed for user root
Dec 07 09:18:14 compute-1 sudo[30443]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsdgeoeiaolgdueozlimtnfboswwdmhq ; /usr/bin/python3'
Dec 07 09:18:14 compute-1 sudo[30443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:18:14 compute-1 python3[30445]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765099092.4998693-33941-45750858921809/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:18:14 compute-1 sudo[30443]: pam_unix(sudo:session): session closed for user root
Dec 07 09:18:14 compute-1 sudo[30469]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svjsosidgmnemqveyskxfkzhwebsbkmz ; /usr/bin/python3'
Dec 07 09:18:14 compute-1 sudo[30469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:18:14 compute-1 python3[30471]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 07 09:18:14 compute-1 sudo[30469]: pam_unix(sudo:session): session closed for user root
Dec 07 09:18:15 compute-1 sudo[30542]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvjemibxmapeltnhrlxabqxecemgkzfl ; /usr/bin/python3'
Dec 07 09:18:15 compute-1 sudo[30542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:18:15 compute-1 python3[30544]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765099092.4998693-33941-45750858921809/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:18:15 compute-1 sudo[30542]: pam_unix(sudo:session): session closed for user root
Dec 07 09:18:15 compute-1 sudo[30568]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzubbwmqlfezrbbeisekzcnhbxftfknj ; /usr/bin/python3'
Dec 07 09:18:15 compute-1 sudo[30568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:18:15 compute-1 python3[30570]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 07 09:18:15 compute-1 sudo[30568]: pam_unix(sudo:session): session closed for user root
Dec 07 09:18:15 compute-1 sudo[30641]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzejjzmuxzdrqugspeqdpkyrqhvxzhwe ; /usr/bin/python3'
Dec 07 09:18:15 compute-1 sudo[30641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:18:16 compute-1 python3[30643]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765099092.4998693-33941-45750858921809/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:18:16 compute-1 sudo[30641]: pam_unix(sudo:session): session closed for user root
Dec 07 09:18:16 compute-1 sudo[30667]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvgwoiovumiwimjlqwbpqutlbahmugqj ; /usr/bin/python3'
Dec 07 09:18:16 compute-1 sudo[30667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:18:16 compute-1 python3[30669]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 07 09:18:16 compute-1 sudo[30667]: pam_unix(sudo:session): session closed for user root
Dec 07 09:18:16 compute-1 sudo[30740]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-balfrnibtvbdnkqroacpdfiusqfqsajj ; /usr/bin/python3'
Dec 07 09:18:16 compute-1 sudo[30740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:18:16 compute-1 python3[30742]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765099092.4998693-33941-45750858921809/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:18:16 compute-1 sudo[30740]: pam_unix(sudo:session): session closed for user root
Dec 07 09:18:16 compute-1 sudo[30766]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdgwjcdvxvzzhqalbesvjkpsjssvqksc ; /usr/bin/python3'
Dec 07 09:18:16 compute-1 sudo[30766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:18:17 compute-1 python3[30768]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 07 09:18:17 compute-1 sudo[30766]: pam_unix(sudo:session): session closed for user root
Dec 07 09:18:17 compute-1 sudo[30839]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afuqonbdgzgsndccqnavlkhgvudwzqox ; /usr/bin/python3'
Dec 07 09:18:17 compute-1 sudo[30839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:18:17 compute-1 python3[30841]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765099092.4998693-33941-45750858921809/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=6e18e2038d54303b4926db53c0b6cced515a9151 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:18:17 compute-1 sudo[30839]: pam_unix(sudo:session): session closed for user root
Dec 07 09:18:29 compute-1 python3[30890]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:19:21 compute-1 sshd-session[30893]: Connection closed by 167.94.138.124 port 48052 [preauth]
Dec 07 09:23:28 compute-1 sshd-session[29981]: Received disconnect from 38.102.83.80 port 53380:11: disconnected by user
Dec 07 09:23:28 compute-1 sshd-session[29981]: Disconnected from user zuul 38.102.83.80 port 53380
Dec 07 09:23:28 compute-1 sshd-session[29977]: pam_unix(sshd:session): session closed for user zuul
Dec 07 09:23:28 compute-1 systemd-logind[796]: Session 8 logged out. Waiting for processes to exit.
Dec 07 09:23:28 compute-1 systemd[1]: session-8.scope: Deactivated successfully.
Dec 07 09:23:28 compute-1 systemd[1]: session-8.scope: Consumed 5.986s CPU time.
Dec 07 09:23:28 compute-1 systemd-logind[796]: Removed session 8.
Dec 07 09:30:05 compute-1 sshd-session[30906]: Accepted publickey for zuul from 192.168.122.30 port 53332 ssh2: ECDSA SHA256:Ge4vEI3tJyOAEV3kv3WUGTzBV/uoL+dgWZHkPhbKL64
Dec 07 09:30:05 compute-1 systemd-logind[796]: New session 9 of user zuul.
Dec 07 09:30:05 compute-1 systemd[1]: Started Session 9 of User zuul.
Dec 07 09:30:05 compute-1 sshd-session[30906]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 07 09:30:06 compute-1 python3.9[31059]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 07 09:30:08 compute-1 sudo[31238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufvnjjbfodxsoddwcrawlwjmurchbzmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765099807.62344-57-95638212613104/AnsiballZ_command.py'
Dec 07 09:30:08 compute-1 sudo[31238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:30:08 compute-1 python3.9[31240]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:30:15 compute-1 sudo[31238]: pam_unix(sudo:session): session closed for user root
Dec 07 09:30:15 compute-1 sshd-session[30909]: Connection closed by 192.168.122.30 port 53332
Dec 07 09:30:15 compute-1 sshd-session[30906]: pam_unix(sshd:session): session closed for user zuul
Dec 07 09:30:15 compute-1 systemd[1]: session-9.scope: Deactivated successfully.
Dec 07 09:30:15 compute-1 systemd[1]: session-9.scope: Consumed 8.422s CPU time.
Dec 07 09:30:15 compute-1 systemd-logind[796]: Session 9 logged out. Waiting for processes to exit.
Dec 07 09:30:15 compute-1 systemd-logind[796]: Removed session 9.
Dec 07 09:30:31 compute-1 sshd-session[31299]: Accepted publickey for zuul from 192.168.122.30 port 54432 ssh2: ECDSA SHA256:Ge4vEI3tJyOAEV3kv3WUGTzBV/uoL+dgWZHkPhbKL64
Dec 07 09:30:31 compute-1 systemd-logind[796]: New session 10 of user zuul.
Dec 07 09:30:31 compute-1 systemd[1]: Started Session 10 of User zuul.
Dec 07 09:30:31 compute-1 sshd-session[31299]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 07 09:30:31 compute-1 python3.9[31452]: ansible-ansible.legacy.ping Invoked with data=pong
Dec 07 09:30:33 compute-1 python3.9[31626]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 07 09:30:33 compute-1 sudo[31776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gklwagtztfbxctxmnwcxrrhbhkpdvbds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765099833.5352714-94-118743898666793/AnsiballZ_command.py'
Dec 07 09:30:33 compute-1 sudo[31776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:30:34 compute-1 python3.9[31778]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:30:34 compute-1 sudo[31776]: pam_unix(sudo:session): session closed for user root
Dec 07 09:30:35 compute-1 sudo[31929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwvrmfelbrhvzgnptthxquktjajoqxrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765099834.7030308-130-141443270356029/AnsiballZ_stat.py'
Dec 07 09:30:35 compute-1 sudo[31929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:30:35 compute-1 python3.9[31931]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 07 09:30:35 compute-1 sudo[31929]: pam_unix(sudo:session): session closed for user root
Dec 07 09:30:36 compute-1 sudo[32081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifjaqdqfonbtyrsxbzzrwdfeymeftayt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765099835.583609-154-135552864055536/AnsiballZ_file.py'
Dec 07 09:30:36 compute-1 sudo[32081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:30:36 compute-1 python3.9[32083]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:30:36 compute-1 sudo[32081]: pam_unix(sudo:session): session closed for user root
Dec 07 09:30:36 compute-1 sudo[32233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulhukvioyckdzrgduzsvqijcclnsiftt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765099836.495349-178-124616905851595/AnsiballZ_stat.py'
Dec 07 09:30:36 compute-1 sudo[32233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:30:36 compute-1 python3.9[32235]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:30:36 compute-1 sudo[32233]: pam_unix(sudo:session): session closed for user root
Dec 07 09:30:37 compute-1 sudo[32356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrxwwjwoiahtamkgfqzuzqidvvuehens ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765099836.495349-178-124616905851595/AnsiballZ_copy.py'
Dec 07 09:30:37 compute-1 sudo[32356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:30:37 compute-1 python3.9[32358]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1765099836.495349-178-124616905851595/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:30:37 compute-1 sudo[32356]: pam_unix(sudo:session): session closed for user root
Dec 07 09:30:38 compute-1 sudo[32508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncngmfodsoxsszckqpzyukdohpzpusjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765099837.9486167-223-216749804963954/AnsiballZ_setup.py'
Dec 07 09:30:38 compute-1 sudo[32508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:30:38 compute-1 python3.9[32510]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 07 09:30:38 compute-1 sudo[32508]: pam_unix(sudo:session): session closed for user root
Dec 07 09:30:39 compute-1 sudo[32664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yguidxeidkvklbgvapknjdmraukczvez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765099838.9827046-247-131952118592982/AnsiballZ_file.py'
Dec 07 09:30:39 compute-1 sudo[32664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:30:39 compute-1 python3.9[32666]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:30:39 compute-1 sudo[32664]: pam_unix(sudo:session): session closed for user root
Dec 07 09:30:40 compute-1 sudo[32816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddznphifmgolvwundnqdsidapbyqverm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765099839.9955764-274-85409332711786/AnsiballZ_file.py'
Dec 07 09:30:40 compute-1 sudo[32816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:30:40 compute-1 python3.9[32818]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:30:40 compute-1 sudo[32816]: pam_unix(sudo:session): session closed for user root
Dec 07 09:30:41 compute-1 python3.9[32968]: ansible-ansible.builtin.service_facts Invoked
Dec 07 09:30:47 compute-1 python3.9[33221]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:30:48 compute-1 python3.9[33371]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 07 09:30:49 compute-1 python3.9[33525]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 07 09:30:50 compute-1 sudo[33681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhrcqsztdvbhtvvfpfjnfsdqkcqkdacj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765099849.8840344-418-104156764352546/AnsiballZ_setup.py'
Dec 07 09:30:50 compute-1 sudo[33681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:30:50 compute-1 python3.9[33683]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 07 09:30:50 compute-1 sudo[33681]: pam_unix(sudo:session): session closed for user root
Dec 07 09:30:51 compute-1 sudo[33765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-synonplxcuikardautxldcrixfurpnqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765099849.8840344-418-104156764352546/AnsiballZ_dnf.py'
Dec 07 09:30:51 compute-1 sudo[33765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:30:51 compute-1 python3.9[33767]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 07 09:31:33 compute-1 systemd[1]: Reloading.
Dec 07 09:31:33 compute-1 systemd-rc-local-generator[33966]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:31:33 compute-1 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Dec 07 09:31:33 compute-1 systemd[1]: Reloading.
Dec 07 09:31:33 compute-1 systemd-rc-local-generator[34006]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:31:34 compute-1 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Dec 07 09:31:34 compute-1 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Dec 07 09:31:34 compute-1 systemd[1]: Reloading.
Dec 07 09:31:34 compute-1 systemd-rc-local-generator[34042]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:31:34 compute-1 systemd[1]: Listening on LVM2 poll daemon socket.
Dec 07 09:31:34 compute-1 dbus-broker-launch[740]: Noticed file-system modification, trigger reload.
Dec 07 09:31:34 compute-1 dbus-broker-launch[740]: Noticed file-system modification, trigger reload.
Dec 07 09:32:37 compute-1 kernel: SELinux:  Converting 2718 SID table entries...
Dec 07 09:32:37 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Dec 07 09:32:37 compute-1 kernel: SELinux:  policy capability open_perms=1
Dec 07 09:32:37 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Dec 07 09:32:37 compute-1 kernel: SELinux:  policy capability always_check_network=0
Dec 07 09:32:37 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 07 09:32:37 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 07 09:32:37 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 07 09:32:37 compute-1 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Dec 07 09:32:37 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 07 09:32:37 compute-1 systemd[1]: Starting man-db-cache-update.service...
Dec 07 09:32:37 compute-1 systemd[1]: Reloading.
Dec 07 09:32:37 compute-1 systemd-rc-local-generator[34389]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:32:37 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 07 09:32:38 compute-1 sudo[33765]: pam_unix(sudo:session): session closed for user root
Dec 07 09:32:39 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 07 09:32:39 compute-1 systemd[1]: Finished man-db-cache-update.service.
Dec 07 09:32:39 compute-1 systemd[1]: man-db-cache-update.service: Consumed 1.193s CPU time.
Dec 07 09:32:39 compute-1 systemd[1]: run-r14433b84a2c54e1db8cbe242f1ad2331.service: Deactivated successfully.
Dec 07 09:32:39 compute-1 sudo[35297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzrlbshkmzyrxxvscqsivphkfppsnzja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765099959.5446632-455-250027762958877/AnsiballZ_command.py'
Dec 07 09:32:39 compute-1 sudo[35297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:32:40 compute-1 python3.9[35299]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:32:41 compute-1 sudo[35297]: pam_unix(sudo:session): session closed for user root
Dec 07 09:32:41 compute-1 sudo[35578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffcmhqzfrmpekuzqpvoyfayzydfbnzfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765099961.3040276-478-101496413430713/AnsiballZ_selinux.py'
Dec 07 09:32:41 compute-1 sudo[35578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:32:42 compute-1 python3.9[35580]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec 07 09:32:42 compute-1 sudo[35578]: pam_unix(sudo:session): session closed for user root
Dec 07 09:32:43 compute-1 sudo[35730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-deqvolrqhlazeodbicldhdtdkpxjcigk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765099962.7048554-511-63219270742394/AnsiballZ_command.py'
Dec 07 09:32:43 compute-1 sudo[35730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:32:43 compute-1 python3.9[35732]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec 07 09:32:44 compute-1 sudo[35730]: pam_unix(sudo:session): session closed for user root
Dec 07 09:32:44 compute-1 sudo[35883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lstgocjzforwtrwfenejwclzxfohhrxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765099964.4445386-535-5209067014761/AnsiballZ_file.py'
Dec 07 09:32:44 compute-1 sudo[35883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:32:46 compute-1 python3.9[35885]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:32:46 compute-1 sudo[35883]: pam_unix(sudo:session): session closed for user root
Dec 07 09:32:47 compute-1 sudo[36035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsuqzleclyzabljyhwteyfcojaoueuez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765099967.622341-559-123661875589709/AnsiballZ_mount.py'
Dec 07 09:32:47 compute-1 sudo[36035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:32:48 compute-1 python3.9[36037]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec 07 09:32:48 compute-1 sudo[36035]: pam_unix(sudo:session): session closed for user root
Dec 07 09:32:49 compute-1 sudo[36187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzzmxmpofszfoujazwurmknuckgvapnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765099969.330136-644-164430084288461/AnsiballZ_file.py'
Dec 07 09:32:49 compute-1 sudo[36187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:32:51 compute-1 python3.9[36189]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:32:51 compute-1 sudo[36187]: pam_unix(sudo:session): session closed for user root
Dec 07 09:32:54 compute-1 sudo[36339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekitgxhwwesbzkdjqvtjvmrpnectbcml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765099974.3177395-668-187858453485235/AnsiballZ_stat.py'
Dec 07 09:32:54 compute-1 sudo[36339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:32:59 compute-1 python3.9[36341]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:32:59 compute-1 sudo[36339]: pam_unix(sudo:session): session closed for user root
Dec 07 09:33:00 compute-1 sudo[36463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbvddfgnlowarrhbpxlitxkfnhkjliha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765099974.3177395-668-187858453485235/AnsiballZ_copy.py'
Dec 07 09:33:00 compute-1 sudo[36463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:33:00 compute-1 python3.9[36465]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765099974.3177395-668-187858453485235/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=04e3974ae626deea30737932cd4a2d2f473c7179 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:33:00 compute-1 sudo[36463]: pam_unix(sudo:session): session closed for user root
Dec 07 09:33:01 compute-1 sudo[36615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iodiqmgtombbtbsfasabpsijbylvhjli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765099981.6601794-739-166667746777553/AnsiballZ_stat.py'
Dec 07 09:33:01 compute-1 sudo[36615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:33:02 compute-1 python3.9[36617]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 07 09:33:02 compute-1 sudo[36615]: pam_unix(sudo:session): session closed for user root
Dec 07 09:33:02 compute-1 sudo[36767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zaknociqqiuwfnekyadkuvvsuzfldcqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765099982.475196-763-115026685428398/AnsiballZ_command.py'
Dec 07 09:33:02 compute-1 sudo[36767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:33:03 compute-1 python3.9[36769]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:33:03 compute-1 sudo[36767]: pam_unix(sudo:session): session closed for user root
Dec 07 09:33:03 compute-1 sudo[36920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndmfughcxkkqpdqqaxidfcutupzekuhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765099983.3329704-787-192119654887079/AnsiballZ_file.py'
Dec 07 09:33:03 compute-1 sudo[36920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:33:03 compute-1 python3.9[36922]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:33:03 compute-1 sudo[36920]: pam_unix(sudo:session): session closed for user root
Dec 07 09:33:04 compute-1 sudo[37072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-quajfzzyfshjlwgzfdrkkswuxrqkobql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765099984.3975525-820-249977998954347/AnsiballZ_getent.py'
Dec 07 09:33:04 compute-1 sudo[37072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:33:05 compute-1 python3.9[37074]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec 07 09:33:05 compute-1 sudo[37072]: pam_unix(sudo:session): session closed for user root
Dec 07 09:33:05 compute-1 sudo[37225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kozqvzxmaihizngricghswqlmofltycx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765099985.2920954-844-73013412170880/AnsiballZ_group.py'
Dec 07 09:33:05 compute-1 sudo[37225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:33:06 compute-1 python3.9[37227]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 07 09:33:06 compute-1 groupadd[37228]: group added to /etc/group: name=qemu, GID=107
Dec 07 09:33:06 compute-1 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 07 09:33:06 compute-1 groupadd[37228]: group added to /etc/gshadow: name=qemu
Dec 07 09:33:06 compute-1 groupadd[37228]: new group: name=qemu, GID=107
Dec 07 09:33:06 compute-1 sudo[37225]: pam_unix(sudo:session): session closed for user root
Dec 07 09:33:06 compute-1 sudo[37384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfjjzuckrmxfverumioxoegylszfesjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765099986.3134265-868-102990251455461/AnsiballZ_user.py'
Dec 07 09:33:06 compute-1 sudo[37384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:33:06 compute-1 python3.9[37386]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 07 09:33:06 compute-1 useradd[37388]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Dec 07 09:33:07 compute-1 sudo[37384]: pam_unix(sudo:session): session closed for user root
Dec 07 09:33:07 compute-1 sudo[37544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yydhmlflqmmgbyfjxpzjjzzwuuvrfluc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765099987.4065316-892-181915105830080/AnsiballZ_getent.py'
Dec 07 09:33:07 compute-1 sudo[37544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:33:07 compute-1 python3.9[37546]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec 07 09:33:07 compute-1 sudo[37544]: pam_unix(sudo:session): session closed for user root
Dec 07 09:33:08 compute-1 sudo[37697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukcoxjlagqokajlidiydsbtuenxkheyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765099988.1697276-916-154603901455739/AnsiballZ_group.py'
Dec 07 09:33:08 compute-1 sudo[37697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:33:08 compute-1 python3.9[37699]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 07 09:33:08 compute-1 groupadd[37700]: group added to /etc/group: name=hugetlbfs, GID=42477
Dec 07 09:33:08 compute-1 groupadd[37700]: group added to /etc/gshadow: name=hugetlbfs
Dec 07 09:33:08 compute-1 groupadd[37700]: new group: name=hugetlbfs, GID=42477
Dec 07 09:33:08 compute-1 sudo[37697]: pam_unix(sudo:session): session closed for user root
Dec 07 09:33:09 compute-1 sudo[37855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkxizudrubjikzkggkzfcmamhhvexscb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765099989.1472063-943-150371813444431/AnsiballZ_file.py'
Dec 07 09:33:09 compute-1 sudo[37855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:33:09 compute-1 python3.9[37857]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec 07 09:33:09 compute-1 sudo[37855]: pam_unix(sudo:session): session closed for user root
Dec 07 09:33:10 compute-1 sudo[38007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgkojzsucntkhbewiyjdgrqofvqdkyfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765099990.2633867-976-200094723074489/AnsiballZ_dnf.py'
Dec 07 09:33:10 compute-1 sudo[38007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:33:10 compute-1 python3.9[38009]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 07 09:33:12 compute-1 sudo[38007]: pam_unix(sudo:session): session closed for user root
Dec 07 09:33:12 compute-1 sudo[38160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydskbstbbqhjcosbqucuplwokrmoojxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765099992.6125216-1000-97834192144835/AnsiballZ_file.py'
Dec 07 09:33:12 compute-1 sudo[38160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:33:13 compute-1 python3.9[38162]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:33:13 compute-1 sudo[38160]: pam_unix(sudo:session): session closed for user root
Dec 07 09:33:13 compute-1 sudo[38312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yevnpolferkkalvmalsjdlcnsedzuqkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765099993.441988-1024-207626559697501/AnsiballZ_stat.py'
Dec 07 09:33:13 compute-1 sudo[38312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:33:13 compute-1 python3.9[38314]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:33:13 compute-1 sudo[38312]: pam_unix(sudo:session): session closed for user root
Dec 07 09:33:14 compute-1 sudo[38435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uukthubiqjocixfajufqcreilaplkppf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765099993.441988-1024-207626559697501/AnsiballZ_copy.py'
Dec 07 09:33:14 compute-1 sudo[38435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:33:14 compute-1 python3.9[38437]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765099993.441988-1024-207626559697501/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:33:14 compute-1 sudo[38435]: pam_unix(sudo:session): session closed for user root
Dec 07 09:33:15 compute-1 sudo[38587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urzxhiyncxdhzcybystzvbbqtlorldrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765099994.8886144-1069-123915169563319/AnsiballZ_systemd.py'
Dec 07 09:33:15 compute-1 sudo[38587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:33:15 compute-1 python3.9[38589]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 07 09:33:15 compute-1 systemd[1]: Starting Load Kernel Modules...
Dec 07 09:33:15 compute-1 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 07 09:33:15 compute-1 kernel: Bridge firewalling registered
Dec 07 09:33:15 compute-1 systemd-modules-load[38593]: Inserted module 'br_netfilter'
Dec 07 09:33:15 compute-1 systemd[1]: Finished Load Kernel Modules.
Dec 07 09:33:16 compute-1 sudo[38587]: pam_unix(sudo:session): session closed for user root
Dec 07 09:33:17 compute-1 sudo[38747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sylojaereqzagkohdzgdmumbrxvmftjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765099997.1401722-1093-22967322158116/AnsiballZ_stat.py'
Dec 07 09:33:17 compute-1 sudo[38747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:33:17 compute-1 python3.9[38749]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:33:17 compute-1 sudo[38747]: pam_unix(sudo:session): session closed for user root
Dec 07 09:33:17 compute-1 sudo[38870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypasexfztbjsinjstxxceobejpxzhvbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765099997.1401722-1093-22967322158116/AnsiballZ_copy.py'
Dec 07 09:33:17 compute-1 sudo[38870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:33:18 compute-1 python3.9[38872]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765099997.1401722-1093-22967322158116/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:33:18 compute-1 sudo[38870]: pam_unix(sudo:session): session closed for user root
Dec 07 09:33:19 compute-1 sudo[39022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fszaoakpgzawfbjpatgwgqjfavohkgkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765099998.8555753-1147-73036192642043/AnsiballZ_dnf.py'
Dec 07 09:33:19 compute-1 sudo[39022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:33:19 compute-1 python3.9[39024]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 07 09:33:23 compute-1 dbus-broker-launch[740]: Noticed file-system modification, trigger reload.
Dec 07 09:33:23 compute-1 dbus-broker-launch[740]: Noticed file-system modification, trigger reload.
Dec 07 09:33:24 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 07 09:33:24 compute-1 systemd[1]: Starting man-db-cache-update.service...
Dec 07 09:33:24 compute-1 systemd[1]: Reloading.
Dec 07 09:33:24 compute-1 systemd-rc-local-generator[39085]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:33:24 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 07 09:33:26 compute-1 sudo[39022]: pam_unix(sudo:session): session closed for user root
Dec 07 09:33:27 compute-1 python3.9[40252]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 07 09:33:27 compute-1 python3.9[41160]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec 07 09:33:28 compute-1 python3.9[42182]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 07 09:33:29 compute-1 sudo[42877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfhcbbfqmqdwbfwurrhsmiehogazccyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100009.1010308-1264-187481690074901/AnsiballZ_command.py'
Dec 07 09:33:29 compute-1 sudo[42877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:33:30 compute-1 python3.9[42879]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:33:30 compute-1 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 07 09:33:30 compute-1 systemd[1]: Starting Authorization Manager...
Dec 07 09:33:30 compute-1 systemd[1]: Started Dynamic System Tuning Daemon.
Dec 07 09:33:30 compute-1 polkitd[43436]: Started polkitd version 0.117
Dec 07 09:33:30 compute-1 polkitd[43436]: Loading rules from directory /etc/polkit-1/rules.d
Dec 07 09:33:30 compute-1 polkitd[43436]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 07 09:33:30 compute-1 polkitd[43436]: Finished loading, compiling and executing 2 rules
Dec 07 09:33:30 compute-1 systemd[1]: Started Authorization Manager.
Dec 07 09:33:30 compute-1 polkitd[43436]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Dec 07 09:33:30 compute-1 sudo[42877]: pam_unix(sudo:session): session closed for user root
Dec 07 09:33:30 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 07 09:33:30 compute-1 systemd[1]: Finished man-db-cache-update.service.
Dec 07 09:33:30 compute-1 systemd[1]: man-db-cache-update.service: Consumed 5.048s CPU time.
Dec 07 09:33:30 compute-1 systemd[1]: run-r2b6a8d6712974fa9a2d5f8571d261f72.service: Deactivated successfully.
Dec 07 09:33:31 compute-1 sudo[43606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulykujshzasnqhdcqsfvzmfudbicysqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100011.2486365-1291-186227431433060/AnsiballZ_systemd.py'
Dec 07 09:33:31 compute-1 sudo[43606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:33:31 compute-1 python3.9[43608]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 07 09:33:31 compute-1 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec 07 09:33:31 compute-1 systemd[1]: tuned.service: Deactivated successfully.
Dec 07 09:33:31 compute-1 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec 07 09:33:31 compute-1 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 07 09:33:32 compute-1 systemd[1]: Started Dynamic System Tuning Daemon.
Dec 07 09:33:32 compute-1 sudo[43606]: pam_unix(sudo:session): session closed for user root
Dec 07 09:33:33 compute-1 python3.9[43771]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec 07 09:33:36 compute-1 sudo[43921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdfyucghkqniltbkgicwgoiqwrfeotug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100016.5681176-1463-260832233117895/AnsiballZ_systemd.py'
Dec 07 09:33:36 compute-1 sudo[43921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:33:37 compute-1 python3.9[43923]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 07 09:33:38 compute-1 systemd[1]: Reloading.
Dec 07 09:33:38 compute-1 systemd-rc-local-generator[43955]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:33:38 compute-1 sudo[43921]: pam_unix(sudo:session): session closed for user root
Dec 07 09:33:39 compute-1 sudo[44111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oimuuadwoahrvmbxeyswqgjfqohmfeoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100018.758393-1463-91927137256214/AnsiballZ_systemd.py'
Dec 07 09:33:39 compute-1 sudo[44111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:33:39 compute-1 python3.9[44113]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 07 09:33:39 compute-1 systemd[1]: Reloading.
Dec 07 09:33:39 compute-1 systemd-rc-local-generator[44142]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:33:39 compute-1 sudo[44111]: pam_unix(sudo:session): session closed for user root
Dec 07 09:33:40 compute-1 sudo[44300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofglyjpkzwuyhlwfujfplvhgcyalwwyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100020.2333581-1510-184955627900718/AnsiballZ_command.py'
Dec 07 09:33:40 compute-1 sudo[44300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:33:40 compute-1 python3.9[44302]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:33:40 compute-1 sudo[44300]: pam_unix(sudo:session): session closed for user root
Dec 07 09:33:41 compute-1 sudo[44453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsyevvmfgnuqtgmpsunkawqkryxmtrvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100021.1649055-1534-62631768660995/AnsiballZ_command.py'
Dec 07 09:33:41 compute-1 sudo[44453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:33:41 compute-1 python3.9[44455]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:33:41 compute-1 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Dec 07 09:33:41 compute-1 sudo[44453]: pam_unix(sudo:session): session closed for user root
Dec 07 09:33:42 compute-1 sudo[44606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyfnvwwvifsrropoygtlzdksjdtelnhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100021.9278316-1558-223452030218025/AnsiballZ_command.py'
Dec 07 09:33:42 compute-1 sudo[44606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:33:42 compute-1 python3.9[44608]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:33:43 compute-1 sudo[44606]: pam_unix(sudo:session): session closed for user root
Dec 07 09:33:44 compute-1 sudo[44768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isejgkfhoyaulqyljfwmkdymavipuqyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100024.1772518-1582-22796695081846/AnsiballZ_command.py'
Dec 07 09:33:44 compute-1 sudo[44768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:33:44 compute-1 python3.9[44770]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:33:44 compute-1 sudo[44768]: pam_unix(sudo:session): session closed for user root
Dec 07 09:33:45 compute-1 sudo[44921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uymahpbfgponfxvypzpapjcseoogvsef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100024.9691045-1606-162631133844930/AnsiballZ_systemd.py'
Dec 07 09:33:45 compute-1 sudo[44921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:33:45 compute-1 python3.9[44923]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 07 09:33:45 compute-1 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 07 09:33:45 compute-1 systemd[1]: Stopped Apply Kernel Variables.
Dec 07 09:33:45 compute-1 systemd[1]: Stopping Apply Kernel Variables...
Dec 07 09:33:45 compute-1 systemd[1]: Starting Apply Kernel Variables...
Dec 07 09:33:45 compute-1 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 07 09:33:45 compute-1 systemd[1]: Finished Apply Kernel Variables.
Dec 07 09:33:45 compute-1 sudo[44921]: pam_unix(sudo:session): session closed for user root
Dec 07 09:33:46 compute-1 sshd-session[31302]: Connection closed by 192.168.122.30 port 54432
Dec 07 09:33:46 compute-1 sshd-session[31299]: pam_unix(sshd:session): session closed for user zuul
Dec 07 09:33:46 compute-1 systemd[1]: session-10.scope: Deactivated successfully.
Dec 07 09:33:46 compute-1 systemd[1]: session-10.scope: Consumed 2min 13.910s CPU time.
Dec 07 09:33:46 compute-1 systemd-logind[796]: Session 10 logged out. Waiting for processes to exit.
Dec 07 09:33:46 compute-1 systemd-logind[796]: Removed session 10.
Dec 07 09:33:51 compute-1 sshd-session[44954]: Accepted publickey for zuul from 192.168.122.30 port 33242 ssh2: ECDSA SHA256:Ge4vEI3tJyOAEV3kv3WUGTzBV/uoL+dgWZHkPhbKL64
Dec 07 09:33:51 compute-1 systemd-logind[796]: New session 11 of user zuul.
Dec 07 09:33:51 compute-1 systemd[1]: Started Session 11 of User zuul.
Dec 07 09:33:51 compute-1 sshd-session[44954]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 07 09:33:52 compute-1 python3.9[45107]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 07 09:33:53 compute-1 sudo[45261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrpqlxwmpwxfrsxvoaefigdhzficsfkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100032.8960485-69-161337164555814/AnsiballZ_getent.py'
Dec 07 09:33:53 compute-1 sudo[45261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:33:53 compute-1 python3.9[45263]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec 07 09:33:53 compute-1 sudo[45261]: pam_unix(sudo:session): session closed for user root
Dec 07 09:33:54 compute-1 sudo[45414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vacnuyohstfuilwslupmoalurrzwkifk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100033.7811399-93-18834708849075/AnsiballZ_group.py'
Dec 07 09:33:54 compute-1 sudo[45414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:33:54 compute-1 python3.9[45416]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 07 09:33:54 compute-1 groupadd[45417]: group added to /etc/group: name=openvswitch, GID=42476
Dec 07 09:33:54 compute-1 groupadd[45417]: group added to /etc/gshadow: name=openvswitch
Dec 07 09:33:54 compute-1 groupadd[45417]: new group: name=openvswitch, GID=42476
Dec 07 09:33:54 compute-1 sudo[45414]: pam_unix(sudo:session): session closed for user root
Dec 07 09:33:55 compute-1 sudo[45572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-raogwjvixjhvgjrqpkxburxteewgbctw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100034.6270285-117-254387210522116/AnsiballZ_user.py'
Dec 07 09:33:55 compute-1 sudo[45572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:33:55 compute-1 python3.9[45574]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 07 09:33:55 compute-1 useradd[45576]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Dec 07 09:33:55 compute-1 useradd[45576]: add 'openvswitch' to group 'hugetlbfs'
Dec 07 09:33:55 compute-1 useradd[45576]: add 'openvswitch' to shadow group 'hugetlbfs'
Dec 07 09:33:55 compute-1 sudo[45572]: pam_unix(sudo:session): session closed for user root
Dec 07 09:33:56 compute-1 sudo[45732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nuxgcheduiktvbcdnhtkqlsgugvrjnxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100035.814596-147-210881773401264/AnsiballZ_setup.py'
Dec 07 09:33:56 compute-1 sudo[45732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:33:56 compute-1 python3.9[45734]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 07 09:33:56 compute-1 sudo[45732]: pam_unix(sudo:session): session closed for user root
Dec 07 09:33:57 compute-1 sudo[45816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpbjraibtomfxxjeogcuyvkpgttodajr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100035.814596-147-210881773401264/AnsiballZ_dnf.py'
Dec 07 09:33:57 compute-1 sudo[45816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:33:57 compute-1 python3.9[45818]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 07 09:33:59 compute-1 sudo[45816]: pam_unix(sudo:session): session closed for user root
Dec 07 09:34:00 compute-1 sudo[45980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjvsqlxotykakwnkpwswlpdxwlqvhcfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100039.8303587-189-106388160390182/AnsiballZ_dnf.py'
Dec 07 09:34:00 compute-1 sudo[45980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:34:00 compute-1 python3.9[45982]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 07 09:34:12 compute-1 kernel: SELinux:  Converting 2730 SID table entries...
Dec 07 09:34:12 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Dec 07 09:34:12 compute-1 kernel: SELinux:  policy capability open_perms=1
Dec 07 09:34:12 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Dec 07 09:34:12 compute-1 kernel: SELinux:  policy capability always_check_network=0
Dec 07 09:34:12 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 07 09:34:12 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 07 09:34:12 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 07 09:34:12 compute-1 groupadd[46005]: group added to /etc/group: name=unbound, GID=993
Dec 07 09:34:12 compute-1 groupadd[46005]: group added to /etc/gshadow: name=unbound
Dec 07 09:34:12 compute-1 groupadd[46005]: new group: name=unbound, GID=993
Dec 07 09:34:12 compute-1 useradd[46012]: new user: name=unbound, UID=993, GID=993, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Dec 07 09:34:12 compute-1 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Dec 07 09:34:12 compute-1 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Dec 07 09:34:14 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 07 09:34:14 compute-1 systemd[1]: Starting man-db-cache-update.service...
Dec 07 09:34:14 compute-1 systemd[1]: Reloading.
Dec 07 09:34:14 compute-1 systemd-rc-local-generator[46510]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:34:14 compute-1 systemd-sysv-generator[46513]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:34:14 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 07 09:34:15 compute-1 sudo[45980]: pam_unix(sudo:session): session closed for user root
Dec 07 09:34:15 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 07 09:34:15 compute-1 systemd[1]: Finished man-db-cache-update.service.
Dec 07 09:34:15 compute-1 systemd[1]: run-r763bcfa54c4c455ca2e579e951532fb2.service: Deactivated successfully.
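The two ansible.legacy.dnf tasks above first fetch openvswitch with download_only=True and then install it with state=present, which is what triggers the SELinux policy rebuild, the unbound account creation, and the man-db cache update seen in between. A rough command-line equivalent, sketched in Python (assumption: dnf is driven directly rather than through the Ansible module):

```python
# Sketch of the two package steps recorded above: a download-only pass
# followed by the actual install. Both require root.
import subprocess

def install_openvswitch():
    # Mirrors the first invocation with download_only=True.
    subprocess.run(["dnf", "-y", "install", "--downloadonly", "openvswitch"], check=True)
    # Mirrors the second invocation with state=present.
    subprocess.run(["dnf", "-y", "install", "openvswitch"], check=True)

if __name__ == "__main__":
    install_openvswitch()
```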
Dec 07 09:34:16 compute-1 sudo[47077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fijnsfohujqoghfevyygaegyqxtuppyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100055.9780223-213-46194055645124/AnsiballZ_systemd.py'
Dec 07 09:34:16 compute-1 sudo[47077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:34:16 compute-1 python3.9[47079]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 07 09:34:17 compute-1 systemd[1]: Reloading.
Dec 07 09:34:17 compute-1 systemd-rc-local-generator[47111]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:34:17 compute-1 systemd-sysv-generator[47115]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:34:17 compute-1 systemd[1]: Starting Open vSwitch Database Unit...
Dec 07 09:34:17 compute-1 chown[47122]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Dec 07 09:34:17 compute-1 ovs-ctl[47127]: /etc/openvswitch/conf.db does not exist ... (warning).
Dec 07 09:34:17 compute-1 ovs-ctl[47127]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Dec 07 09:34:17 compute-1 ovs-ctl[47127]: Starting ovsdb-server [  OK  ]
Dec 07 09:34:17 compute-1 ovs-vsctl[47176]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Dec 07 09:34:17 compute-1 ovs-vsctl[47196]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"e231b22a-cdf9-44dd-ad96-a8e48b3d52da\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Dec 07 09:34:17 compute-1 ovs-ctl[47127]: Configuring Open vSwitch system IDs [  OK  ]
Dec 07 09:34:17 compute-1 ovs-ctl[47127]: Enabling remote OVSDB managers [  OK  ]
Dec 07 09:34:17 compute-1 systemd[1]: Started Open vSwitch Database Unit.
Dec 07 09:34:17 compute-1 ovs-vsctl[47202]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-1
Dec 07 09:34:17 compute-1 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Dec 07 09:34:17 compute-1 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Dec 07 09:34:17 compute-1 systemd[1]: Starting Open vSwitch Forwarding Unit...
Dec 07 09:34:17 compute-1 kernel: openvswitch: Open vSwitch switching datapath
Dec 07 09:34:17 compute-1 ovs-ctl[47247]: Inserting openvswitch module [  OK  ]
Dec 07 09:34:17 compute-1 ovs-ctl[47215]: Starting ovs-vswitchd [  OK  ]
Dec 07 09:34:17 compute-1 ovs-vsctl[47264]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-1
Dec 07 09:34:17 compute-1 ovs-ctl[47215]: Enabling remote OVSDB managers [  OK  ]
Dec 07 09:34:17 compute-1 systemd[1]: Started Open vSwitch Forwarding Unit.
Dec 07 09:34:17 compute-1 systemd[1]: Starting Open vSwitch...
Dec 07 09:34:17 compute-1 systemd[1]: Finished Open vSwitch.
Dec 07 09:34:17 compute-1 sudo[47077]: pam_unix(sudo:session): session closed for user root
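On this first start of openvswitch.service, ovs-ctl creates /etc/openvswitch/conf.db, launches ovsdb-server and ovs-vswitchd, and stamps the database with version and system identifiers; the ovs-vsctl calls above show exactly what it ran. A sketch reproducing those calls by hand (commands copied from the log; the system-id and version values are specific to this host):

```python
# Re-runs the ovs-vsctl calls logged by ovs-ctl during the first service start.
# Assumption: ovsdb-server is already running and this is executed as root.
import subprocess

def run(*args):
    subprocess.run(["ovs-vsctl", "--no-wait", *args], check=True)

# Initialize the database and record the schema version (values from the log).
run("--", "init", "--", "set", "Open_vSwitch", ".", "db-version=8.5.1")
# Record the userspace version and the per-host identifiers.
run("set", "Open_vSwitch", ".",
    "ovs-version=3.3.5-115.el9s",
    "external-ids:system-id=e231b22a-cdf9-44dd-ad96-a8e48b3d52da",
    "external-ids:rundir=/var/run/openvswitch",
    "system-type=centos", "system-version=9")
# Record the hostname, as the service start sequence above does.
run("add", "Open_vSwitch", ".", "external-ids", "hostname=compute-1")
```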
Dec 07 09:34:19 compute-1 python3.9[47416]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 07 09:34:20 compute-1 sudo[47566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwfyrbgznoioblnriopyspundoptjtyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100060.293373-267-18736373573314/AnsiballZ_sefcontext.py'
Dec 07 09:34:20 compute-1 sudo[47566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:34:21 compute-1 python3.9[47568]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec 07 09:34:22 compute-1 kernel: SELinux:  Converting 2744 SID table entries...
Dec 07 09:34:22 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Dec 07 09:34:22 compute-1 kernel: SELinux:  policy capability open_perms=1
Dec 07 09:34:22 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Dec 07 09:34:22 compute-1 kernel: SELinux:  policy capability always_check_network=0
Dec 07 09:34:22 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 07 09:34:22 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 07 09:34:22 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 07 09:34:23 compute-1 sudo[47566]: pam_unix(sudo:session): session closed for user root
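The community.general.sefcontext task above registers a persistent file-context rule so everything under /var/lib/edpm-config is labeled container_file_t, then reloads the policy (the second "Converting ... SID table entries" burst). A hedged sketch of the roughly equivalent low-level commands, run as root:

```python
# Rough equivalent of the sefcontext task logged above: add a file-context
# rule for /var/lib/edpm-config(/.*)? and relabel existing files.
import subprocess

subprocess.run(
    ["semanage", "fcontext", "-a", "-t", "container_file_t",
     "/var/lib/edpm-config(/.*)?"],
    check=True,
)
# Apply the new context to anything already present under the path.
subprocess.run(["restorecon", "-Rv", "/var/lib/edpm-config"], check=True)
```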
Dec 07 09:34:23 compute-1 python3.9[47723]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 07 09:34:24 compute-1 sudo[47879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxxtffskkikykoerypzawdgauwwvcmrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100064.3563812-321-144968989560971/AnsiballZ_dnf.py'
Dec 07 09:34:24 compute-1 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Dec 07 09:34:24 compute-1 sudo[47879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:34:24 compute-1 python3.9[47881]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 07 09:34:27 compute-1 sudo[47879]: pam_unix(sudo:session): session closed for user root
Dec 07 09:34:28 compute-1 sudo[48032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkeckydtmpmxnelmfemzcauyfxloumgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100067.6928725-345-154002937368491/AnsiballZ_command.py'
Dec 07 09:34:28 compute-1 sudo[48032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:34:28 compute-1 python3.9[48034]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:34:29 compute-1 sudo[48032]: pam_unix(sudo:session): session closed for user root
Dec 07 09:34:29 compute-1 sudo[48319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iswvkdrqhbuabsgonrgpgdmctmjmxsdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100069.2153373-369-108983555706026/AnsiballZ_file.py'
Dec 07 09:34:29 compute-1 sudo[48319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:34:29 compute-1 python3.9[48321]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 07 09:34:29 compute-1 sudo[48319]: pam_unix(sudo:session): session closed for user root
Dec 07 09:34:30 compute-1 python3.9[48471]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 07 09:34:31 compute-1 sudo[48623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftsvcwklhbyjvvedhwdumbycasjukvvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100071.0900707-417-192904978202041/AnsiballZ_dnf.py'
Dec 07 09:34:31 compute-1 sudo[48623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:34:31 compute-1 python3.9[48625]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 07 09:34:34 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 07 09:34:34 compute-1 systemd[1]: Starting man-db-cache-update.service...
Dec 07 09:34:34 compute-1 systemd[1]: Reloading.
Dec 07 09:34:34 compute-1 systemd-sysv-generator[48662]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:34:34 compute-1 systemd-rc-local-generator[48659]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:34:34 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 07 09:34:35 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 07 09:34:35 compute-1 systemd[1]: Finished man-db-cache-update.service.
Dec 07 09:34:35 compute-1 systemd[1]: run-r9ef49aa67feb44edb44ba3817e622c92.service: Deactivated successfully.
Dec 07 09:34:35 compute-1 sudo[48623]: pam_unix(sudo:session): session closed for user root
Dec 07 09:34:35 compute-1 sudo[48940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlsuopakkkudkeomhwsahkvhzmgyjqtn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100075.5757716-441-249500738967748/AnsiballZ_systemd.py'
Dec 07 09:34:35 compute-1 sudo[48940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:34:36 compute-1 python3.9[48942]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 07 09:34:36 compute-1 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec 07 09:34:36 compute-1 systemd[1]: Stopped Network Manager Wait Online.
Dec 07 09:34:36 compute-1 systemd[1]: Stopping Network Manager Wait Online...
Dec 07 09:34:36 compute-1 systemd[1]: Stopping Network Manager...
Dec 07 09:34:36 compute-1 NetworkManager[7202]: <info>  [1765100076.1878] caught SIGTERM, shutting down normally.
Dec 07 09:34:36 compute-1 NetworkManager[7202]: <info>  [1765100076.1892] dhcp4 (eth0): canceled DHCP transaction
Dec 07 09:34:36 compute-1 NetworkManager[7202]: <info>  [1765100076.1892] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 07 09:34:36 compute-1 NetworkManager[7202]: <info>  [1765100076.1892] dhcp4 (eth0): state changed no lease
Dec 07 09:34:36 compute-1 NetworkManager[7202]: <info>  [1765100076.1895] manager: NetworkManager state is now CONNECTED_SITE
Dec 07 09:34:36 compute-1 NetworkManager[7202]: <info>  [1765100076.1956] exiting (success)
Dec 07 09:34:36 compute-1 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 07 09:34:36 compute-1 systemd[1]: NetworkManager.service: Deactivated successfully.
Dec 07 09:34:36 compute-1 systemd[1]: Stopped Network Manager.
Dec 07 09:34:36 compute-1 systemd[1]: NetworkManager.service: Consumed 13.123s CPU time, 4.4M memory peak, read 0B from disk, written 11.5K to disk.
Dec 07 09:34:36 compute-1 systemd[1]: Starting Network Manager...
Dec 07 09:34:36 compute-1 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.2476] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:7c26b365-f356-47da-bd1a-0ae584570406)
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.2477] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.2534] manager[0x5588f5bfe090]: monitoring kernel firmware directory '/lib/firmware'.
Dec 07 09:34:36 compute-1 systemd[1]: Starting Hostname Service...
Dec 07 09:34:36 compute-1 systemd[1]: Started Hostname Service.
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.3631] hostname: hostname: using hostnamed
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.3631] hostname: static hostname changed from (none) to "compute-1"
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.3635] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.3639] manager[0x5588f5bfe090]: rfkill: Wi-Fi hardware radio set enabled
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.3639] manager[0x5588f5bfe090]: rfkill: WWAN hardware radio set enabled
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.3656] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.3663] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.3664] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.3664] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.3665] manager: Networking is enabled by state file
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.3666] settings: Loaded settings plugin: keyfile (internal)
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.3669] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.3689] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.3695] dhcp: init: Using DHCP client 'internal'
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.3698] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.3701] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.3705] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.3712] device (lo): Activation: starting connection 'lo' (9cb2e8de-c1b3-45af-836f-38efb3fd24ca)
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.3718] device (eth0): carrier: link connected
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.3722] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.3726] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.3727] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.3732] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.3737] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.3741] device (eth1): carrier: link connected
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.3745] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.3748] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (9290e757-2102-5044-b397-b83b445ce6e1) (indicated)
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.3749] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.3752] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.3757] device (eth1): Activation: starting connection 'ci-private-network' (9290e757-2102-5044-b397-b83b445ce6e1)
Dec 07 09:34:36 compute-1 systemd[1]: Started Network Manager.
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.3765] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.3773] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.3778] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.3780] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.3781] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.3783] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.3784] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.3786] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.3788] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.3794] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.3796] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.3812] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.3828] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.3835] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.3840] dhcp4 (eth0): state changed new lease, address=38.102.83.74
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.3843] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.3849] device (lo): Activation: successful, device activated.
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.3861] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.3931] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.3938] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.3940] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.3943] manager: NetworkManager state is now CONNECTED_LOCAL
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.3946] device (eth1): Activation: successful, device activated.
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.3957] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.3961] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.3963] manager: NetworkManager state is now CONNECTED_SITE
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.3967] device (eth0): Activation: successful, device activated.
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.3972] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 07 09:34:36 compute-1 NetworkManager[48950]: <info>  [1765100076.3977] manager: startup complete
Dec 07 09:34:36 compute-1 systemd[1]: Starting Network Manager Wait Online...
Dec 07 09:34:36 compute-1 sudo[48940]: pam_unix(sudo:session): session closed for user root
Dec 07 09:34:36 compute-1 systemd[1]: Finished Network Manager Wait Online.
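Restarting NetworkManager through the ansible.builtin.systemd task stops the old instance (PID 7202), starts a new one (PID 48950) that now loads the NMOvsFactory plugin shipped by NetworkManager-ovs, and re-assumes the existing eth0/eth1 connections until "startup complete". A minimal sketch of the same restart-and-verify step outside Ansible:

```python
# Sketch of the systemd task above: restart NetworkManager and wait until it
# reports startup complete again. Assumes systemctl/nm-online/nmcli on PATH, root.
import subprocess

subprocess.run(["systemctl", "restart", "NetworkManager"], check=True)
# Block until NetworkManager reaches the same "startup complete" milestone
# recorded in the log (NetworkManager-wait-online uses the same mechanism).
subprocess.run(["nm-online", "--timeout", "60"], check=True)
# Optional sanity check of overall state and connectivity.
print(subprocess.run(["nmcli", "general", "status"],
                     capture_output=True, text=True).stdout)
```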
Dec 07 09:34:37 compute-1 sudo[49166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xupafogioqfkowxbdccfrtdzbyhqyjub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100076.749284-465-240141375541303/AnsiballZ_dnf.py'
Dec 07 09:34:37 compute-1 sudo[49166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:34:37 compute-1 python3.9[49168]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 07 09:34:41 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 07 09:34:41 compute-1 systemd[1]: Starting man-db-cache-update.service...
Dec 07 09:34:41 compute-1 systemd[1]: Reloading.
Dec 07 09:34:41 compute-1 systemd-rc-local-generator[49217]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:34:41 compute-1 systemd-sysv-generator[49221]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:34:41 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 07 09:34:43 compute-1 sudo[49166]: pam_unix(sudo:session): session closed for user root
Dec 07 09:34:43 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 07 09:34:43 compute-1 systemd[1]: Finished man-db-cache-update.service.
Dec 07 09:34:43 compute-1 systemd[1]: run-r47ca5e91dda847f0af3885f3bce07103.service: Deactivated successfully.
Dec 07 09:34:44 compute-1 sudo[49627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tiniiebdnttkztyvncxqoealrxtkkkdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100084.5706367-501-262796547341707/AnsiballZ_stat.py'
Dec 07 09:34:44 compute-1 sudo[49627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:34:45 compute-1 python3.9[49629]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 07 09:34:45 compute-1 sudo[49627]: pam_unix(sudo:session): session closed for user root
Dec 07 09:34:45 compute-1 sudo[49779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zksyuayzlgcvpqcdrrogvcjdjuxxodup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100085.3857532-528-72959862509633/AnsiballZ_ini_file.py'
Dec 07 09:34:45 compute-1 sudo[49779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:34:46 compute-1 python3.9[49781]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:34:46 compute-1 sudo[49779]: pam_unix(sudo:session): session closed for user root
Dec 07 09:34:46 compute-1 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 07 09:34:46 compute-1 sudo[49933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fahcbxamjxxeqlvltuifhdgtxygazspe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100086.6754296-558-116076123096196/AnsiballZ_ini_file.py'
Dec 07 09:34:46 compute-1 sudo[49933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:34:47 compute-1 python3.9[49935]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:34:47 compute-1 sudo[49933]: pam_unix(sudo:session): session closed for user root
Dec 07 09:34:47 compute-1 sudo[50085]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhsvwkmwxaripghmzxkehwvwyhngotgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100087.2850037-558-56446034938748/AnsiballZ_ini_file.py'
Dec 07 09:34:47 compute-1 sudo[50085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:34:47 compute-1 python3.9[50087]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:34:47 compute-1 sudo[50085]: pam_unix(sudo:session): session closed for user root
Dec 07 09:34:48 compute-1 sudo[50237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpiaqsauxhnfmlkkkgxgsqfbokgxdypk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100088.1996875-603-192739844701704/AnsiballZ_ini_file.py'
Dec 07 09:34:48 compute-1 sudo[50237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:34:48 compute-1 python3.9[50239]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:34:48 compute-1 sudo[50237]: pam_unix(sudo:session): session closed for user root
Dec 07 09:34:49 compute-1 sudo[50389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iriqpmchqoirlfhnyyjhukrcusldlfdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100088.8027067-603-155475566854136/AnsiballZ_ini_file.py'
Dec 07 09:34:49 compute-1 sudo[50389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:34:49 compute-1 python3.9[50391]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:34:49 compute-1 sudo[50389]: pam_unix(sudo:session): session closed for user root
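The ini_file tasks above adjust /etc/NetworkManager/NetworkManager.conf (and the cloud-init drop-in): they set no-auto-default=* in [main] and remove any dns= and rc-manager= overrides so NetworkManager keeps managing resolv.conf itself. The same edit with Python's configparser, as a sketch (unlike the module, this drops the file's comments and makes no backup):

```python
# Apply the same changes the ini_file tasks above make to NetworkManager.conf:
# ensure no-auto-default=* and drop dns= / rc-manager= from [main].
# Note: this sketch rewrites the file without preserving comments or backups.
import configparser

PATH = "/etc/NetworkManager/NetworkManager.conf"

cfg = configparser.ConfigParser(interpolation=None)
cfg.read(PATH)
if not cfg.has_section("main"):
    cfg.add_section("main")
cfg.set("main", "no-auto-default", "*")   # option=no-auto-default, value=*
cfg.remove_option("main", "dns")          # state=absent for dns
cfg.remove_option("main", "rc-manager")   # state=absent for rc-manager
with open(PATH, "w") as fh:
    cfg.write(fh)
```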
Dec 07 09:34:50 compute-1 sudo[50541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrskjowhhwnjjshybblehzykertfpqmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100089.827756-648-272540178086952/AnsiballZ_stat.py'
Dec 07 09:34:50 compute-1 sudo[50541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:34:50 compute-1 python3.9[50543]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:34:50 compute-1 sudo[50541]: pam_unix(sudo:session): session closed for user root
Dec 07 09:34:50 compute-1 sudo[50664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnffsrydrxvyjskcwslenglvpxfjrtst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100089.827756-648-272540178086952/AnsiballZ_copy.py'
Dec 07 09:34:50 compute-1 sudo[50664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:34:50 compute-1 python3.9[50666]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1765100089.827756-648-272540178086952/.source _original_basename=.8uf3n049 follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:34:50 compute-1 sudo[50664]: pam_unix(sudo:session): session closed for user root
Dec 07 09:34:51 compute-1 sudo[50816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abjsdygtcidvnnvvdpustdsoepjzpjpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100091.288974-693-66879012632938/AnsiballZ_file.py'
Dec 07 09:34:51 compute-1 sudo[50816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:34:51 compute-1 python3.9[50818]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:34:51 compute-1 sudo[50816]: pam_unix(sudo:session): session closed for user root
Dec 07 09:34:52 compute-1 sudo[50968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emtyxukmdgluvsmgklubsqaacfpibvyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100092.384063-717-6577393744270/AnsiballZ_edpm_os_net_config_mappings.py'
Dec 07 09:34:52 compute-1 sudo[50968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:34:53 compute-1 python3.9[50970]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Dec 07 09:34:53 compute-1 sudo[50968]: pam_unix(sudo:session): session closed for user root
Dec 07 09:34:53 compute-1 sudo[51120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vivhrrknuykjkzoomwdhldqlgirjlnfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100093.3571804-744-221339402379055/AnsiballZ_file.py'
Dec 07 09:34:53 compute-1 sudo[51120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:34:53 compute-1 python3.9[51122]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:34:53 compute-1 sudo[51120]: pam_unix(sudo:session): session closed for user root
Dec 07 09:34:54 compute-1 sudo[51272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttjpamudjowcktbpvvjhfkcsfpskaujm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100094.3631551-774-121768783166075/AnsiballZ_stat.py'
Dec 07 09:34:54 compute-1 sudo[51272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:34:54 compute-1 sudo[51272]: pam_unix(sudo:session): session closed for user root
Dec 07 09:34:55 compute-1 sudo[51395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwlenhqmsltuhsdozvpymdfxsloudlhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100094.3631551-774-121768783166075/AnsiballZ_copy.py'
Dec 07 09:34:55 compute-1 sudo[51395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:34:55 compute-1 sudo[51395]: pam_unix(sudo:session): session closed for user root
Dec 07 09:34:56 compute-1 sudo[51547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emqijxazpzlgpotdjrlrhiejtnrscomf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100095.8041255-819-148620421277629/AnsiballZ_slurp.py'
Dec 07 09:34:56 compute-1 sudo[51547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:34:56 compute-1 python3.9[51549]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Dec 07 09:34:56 compute-1 sudo[51547]: pam_unix(sudo:session): session closed for user root
Dec 07 09:34:57 compute-1 sudo[51722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vajeppglozgfslxmngrvwthlnrjevkbn ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100096.9202583-846-280436486496681/async_wrapper.py j427436882251 300 /home/zuul/.ansible/tmp/ansible-tmp-1765100096.9202583-846-280436486496681/AnsiballZ_edpm_os_net_config.py _'
Dec 07 09:34:57 compute-1 sudo[51722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:34:57 compute-1 ansible-async_wrapper.py[51724]: Invoked with j427436882251 300 /home/zuul/.ansible/tmp/ansible-tmp-1765100096.9202583-846-280436486496681/AnsiballZ_edpm_os_net_config.py _
Dec 07 09:34:57 compute-1 ansible-async_wrapper.py[51727]: Starting module and watcher
Dec 07 09:34:57 compute-1 ansible-async_wrapper.py[51727]: Start watching 51728 (300)
Dec 07 09:34:57 compute-1 ansible-async_wrapper.py[51728]: Start module (51728)
Dec 07 09:34:57 compute-1 ansible-async_wrapper.py[51724]: Return async_wrapper task started.
Dec 07 09:34:57 compute-1 sudo[51722]: pam_unix(sudo:session): session closed for user root
Dec 07 09:34:58 compute-1 python3.9[51729]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
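The async wrapper launches the edpm_os_net_config module, which reads /etc/os-net-config/config.yaml (slurped a few steps earlier) and applies it with cleanup, debug, and detailed exit codes enabled via the nmstate/NetworkManager path; the checkpoint and the run of connection-add events below are that apply step. A hedged sketch of a comparable direct invocation; the CLI flag spellings are an assumption mirroring the module parameters, not something taken from the log:

```python
# Sketch: read the network layout and apply it with the os-net-config CLI,
# mirroring the module options above (cleanup=True, debug=True,
# detailed_exit_codes=True). Flag names are assumptions, not from the log.
# Requires PyYAML (python3-pyyaml was installed earlier) and root.
import subprocess
import yaml

CONFIG = "/etc/os-net-config/config.yaml"

with open(CONFIG) as fh:
    layout = yaml.safe_load(fh)
print(f"{len(layout.get('network_config', []))} top-level objects in {CONFIG}")

subprocess.run(
    ["os-net-config",
     "--config-file", CONFIG,
     "--cleanup",               # remove unconfigured interfaces (cleanup=True)
     "--detailed-exit-codes",   # exit code 2 signals "changes were applied"
     "--debug"],
    check=True,
)
```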
Dec 07 09:34:58 compute-1 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Dec 07 09:34:58 compute-1 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Dec 07 09:34:58 compute-1 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Dec 07 09:34:58 compute-1 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Dec 07 09:34:58 compute-1 kernel: cfg80211: failed to load regulatory.db
Dec 07 09:34:59 compute-1 NetworkManager[48950]: <info>  [1765100099.9002] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51730 uid=0 result="success"
Dec 07 09:34:59 compute-1 NetworkManager[48950]: <info>  [1765100099.9023] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51730 uid=0 result="success"
Dec 07 09:34:59 compute-1 NetworkManager[48950]: <info>  [1765100099.9590] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Dec 07 09:34:59 compute-1 NetworkManager[48950]: <info>  [1765100099.9595] audit: op="connection-add" uuid="39ac4798-cd51-4b28-97ea-bb05e8bfc468" name="br-ex-br" pid=51730 uid=0 result="success"
Dec 07 09:34:59 compute-1 NetworkManager[48950]: <info>  [1765100099.9620] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Dec 07 09:34:59 compute-1 NetworkManager[48950]: <info>  [1765100099.9623] audit: op="connection-add" uuid="352d8458-0c98-4294-9fe6-e7509e2749a7" name="br-ex-port" pid=51730 uid=0 result="success"
Dec 07 09:34:59 compute-1 NetworkManager[48950]: <info>  [1765100099.9644] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Dec 07 09:34:59 compute-1 NetworkManager[48950]: <info>  [1765100099.9648] audit: op="connection-add" uuid="08096981-6158-4a0b-8e3e-5968170e5c68" name="eth1-port" pid=51730 uid=0 result="success"
Dec 07 09:34:59 compute-1 NetworkManager[48950]: <info>  [1765100099.9670] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Dec 07 09:34:59 compute-1 NetworkManager[48950]: <info>  [1765100099.9674] audit: op="connection-add" uuid="14e7720a-7bd9-4787-aec4-577dec63a1f6" name="vlan20-port" pid=51730 uid=0 result="success"
Dec 07 09:34:59 compute-1 NetworkManager[48950]: <info>  [1765100099.9695] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Dec 07 09:34:59 compute-1 NetworkManager[48950]: <info>  [1765100099.9701] audit: op="connection-add" uuid="4314300e-f924-49df-bbf3-15daf3298863" name="vlan21-port" pid=51730 uid=0 result="success"
Dec 07 09:34:59 compute-1 NetworkManager[48950]: <info>  [1765100099.9724] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Dec 07 09:34:59 compute-1 NetworkManager[48950]: <info>  [1765100099.9727] audit: op="connection-add" uuid="c3ed1a7b-f0d1-451e-9260-702b9e5c0508" name="vlan22-port" pid=51730 uid=0 result="success"
Dec 07 09:34:59 compute-1 NetworkManager[48950]: <info>  [1765100099.9745] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Dec 07 09:34:59 compute-1 NetworkManager[48950]: <info>  [1765100099.9748] audit: op="connection-add" uuid="f0583b18-69d8-454d-8341-413c38a4498d" name="vlan23-port" pid=51730 uid=0 result="success"
Dec 07 09:34:59 compute-1 NetworkManager[48950]: <info>  [1765100099.9782] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="connection.autoconnect-priority,connection.timestamp,ipv6.method,ipv6.dhcp-timeout,ipv6.addr-gen-mode,ipv4.dhcp-client-id,ipv4.dhcp-timeout,802-3-ethernet.mtu" pid=51730 uid=0 result="success"
Dec 07 09:34:59 compute-1 NetworkManager[48950]: <info>  [1765100099.9811] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Dec 07 09:34:59 compute-1 NetworkManager[48950]: <info>  [1765100099.9813] audit: op="connection-add" uuid="92ab58a2-d795-4772-9402-9d9490a76be7" name="br-ex-if" pid=51730 uid=0 result="success"
Dec 07 09:34:59 compute-1 NetworkManager[48950]: <info>  [1765100099.9861] audit: op="connection-update" uuid="9290e757-2102-5044-b397-b83b445ce6e1" name="ci-private-network" args="ovs-interface.type,connection.port-type,connection.controller,connection.master,connection.slave-type,connection.timestamp,ipv6.method,ipv6.dns,ipv6.routes,ipv6.routing-rules,ipv6.addr-gen-mode,ipv6.addresses,ipv4.method,ipv4.dns,ipv4.never-default,ipv4.routes,ipv4.routing-rules,ipv4.addresses,ovs-external-ids.data" pid=51730 uid=0 result="success"
Dec 07 09:34:59 compute-1 NetworkManager[48950]: <info>  [1765100099.9900] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Dec 07 09:34:59 compute-1 NetworkManager[48950]: <info>  [1765100099.9902] audit: op="connection-add" uuid="2a5644ba-22d5-4e2c-a72c-4a00e605793a" name="vlan20-if" pid=51730 uid=0 result="success"
Dec 07 09:34:59 compute-1 NetworkManager[48950]: <info>  [1765100099.9929] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Dec 07 09:34:59 compute-1 NetworkManager[48950]: <info>  [1765100099.9932] audit: op="connection-add" uuid="934bdc4b-032b-4b32-916e-5e81753f8642" name="vlan21-if" pid=51730 uid=0 result="success"
Dec 07 09:34:59 compute-1 NetworkManager[48950]: <info>  [1765100099.9959] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Dec 07 09:34:59 compute-1 NetworkManager[48950]: <info>  [1765100099.9961] audit: op="connection-add" uuid="752d7ecf-6a68-4c6b-ad9b-f61e828c3f12" name="vlan22-if" pid=51730 uid=0 result="success"
Dec 07 09:34:59 compute-1 NetworkManager[48950]: <info>  [1765100099.9989] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Dec 07 09:34:59 compute-1 NetworkManager[48950]: <info>  [1765100099.9991] audit: op="connection-add" uuid="a4b2aae7-4438-4ab6-af20-e4b79f0e0dde" name="vlan23-if" pid=51730 uid=0 result="success"
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0010] audit: op="connection-delete" uuid="3cf2197f-10b3-3a0c-8bb4-f9a7144ab181" name="Wired connection 1" pid=51730 uid=0 result="success"
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0030] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0044] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0052] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (39ac4798-cd51-4b28-97ea-bb05e8bfc468)
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0053] audit: op="connection-activate" uuid="39ac4798-cd51-4b28-97ea-bb05e8bfc468" name="br-ex-br" pid=51730 uid=0 result="success"
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0055] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0067] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0074] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (352d8458-0c98-4294-9fe6-e7509e2749a7)
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0077] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0086] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0093] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (08096981-6158-4a0b-8e3e-5968170e5c68)
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0095] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0107] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0114] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (14e7720a-7bd9-4787-aec4-577dec63a1f6)
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0118] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0129] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0136] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (4314300e-f924-49df-bbf3-15daf3298863)
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0140] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0151] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0159] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (c3ed1a7b-f0d1-451e-9260-702b9e5c0508)
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0162] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0174] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0181] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (f0583b18-69d8-454d-8341-413c38a4498d)
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0182] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0187] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0189] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0200] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0208] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0214] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (92ab58a2-d795-4772-9402-9d9490a76be7)
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0215] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0221] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0224] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0225] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0228] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0247] device (eth1): disconnecting for new activation request.
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0248] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0252] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0254] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0256] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0260] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0269] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0275] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (2a5644ba-22d5-4e2c-a72c-4a00e605793a)
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0276] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0281] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0286] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0289] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0294] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0301] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0307] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (934bdc4b-032b-4b32-916e-5e81753f8642)
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0308] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0311] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0314] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0317] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0321] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0328] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0335] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (752d7ecf-6a68-4c6b-ad9b-f61e828c3f12)
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0336] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0340] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0342] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0344] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0352] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0361] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0368] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (a4b2aae7-4438-4ab6-af20-e4b79f0e0dde)
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0369] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0373] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0375] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0377] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0379] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0400] audit: op="device-reapply" interface="eth0" ifindex=2 args="connection.autoconnect-priority,ipv6.method,ipv6.addr-gen-mode,ipv4.dhcp-client-id,ipv4.dhcp-timeout,802-3-ethernet.mtu" pid=51730 uid=0 result="success"
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0403] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0411] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0414] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0426] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0434] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 kernel: ovs-system: entered promiscuous mode
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0441] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0458] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0461] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 systemd-udevd[51734]: Network interface NamePolicy= disabled on kernel command line.
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0472] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0479] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 kernel: Timeout policy base is empty
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0486] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0490] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0502] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0513] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0522] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0527] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0546] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0557] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0566] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0571] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0583] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0593] dhcp4 (eth0): canceled DHCP transaction
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0593] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0593] dhcp4 (eth0): state changed no lease
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0597] dhcp4 (eth0): activation: beginning transaction (no timeout)
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0620] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0628] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51730 uid=0 result="fail" reason="Device is not activated"
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0641] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0652] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0663] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0670] dhcp4 (eth0): state changed new lease, address=38.102.83.74
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0679] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0684] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0765] device (eth1): Activation: starting connection 'ci-private-network' (9290e757-2102-5044-b397-b83b445ce6e1)
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0770] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0772] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0773] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0775] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0777] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0778] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0780] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0782] device (eth1): state change: disconnected -> deactivating (reason 'new-activation', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0793] device (eth1): disconnecting for new activation request.
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0794] audit: op="connection-activate" uuid="9290e757-2102-5044-b397-b83b445ce6e1" name="ci-private-network" pid=51730 uid=0 result="success"
Dec 07 09:35:00 compute-1 kernel: br-ex: entered promiscuous mode
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0856] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0863] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0870] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0874] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0880] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0886] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0891] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0899] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0905] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0912] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0918] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0923] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0929] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0937] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0943] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0954] device (eth1): Activation: starting connection 'ci-private-network' (9290e757-2102-5044-b397-b83b445ce6e1)
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0959] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51730 uid=0 result="success"
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0972] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.0978] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 kernel: vlan22: entered promiscuous mode
Dec 07 09:35:00 compute-1 systemd-udevd[51736]: Network interface NamePolicy= disabled on kernel command line.
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.1020] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.1062] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Dec 07 09:35:00 compute-1 kernel: vlan23: entered promiscuous mode
Dec 07 09:35:00 compute-1 systemd-udevd[51735]: Network interface NamePolicy= disabled on kernel command line.
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.1077] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.1080] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.1100] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.1111] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.1113] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.1122] device (eth1): Activation: successful, device activated.
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.1135] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 kernel: vlan21: entered promiscuous mode
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.1138] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 systemd-udevd[51840]: Network interface NamePolicy= disabled on kernel command line.
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.1145] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.1204] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.1208] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Dec 07 09:35:00 compute-1 kernel: vlan20: entered promiscuous mode
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.1247] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.1258] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.1279] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.1296] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.1304] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.1308] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.1313] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.1321] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.1328] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.1335] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.1341] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.1348] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.1357] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.1370] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.1392] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.1441] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.1443] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 07 09:35:00 compute-1 NetworkManager[48950]: <info>  [1765100100.1449] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 07 09:35:01 compute-1 NetworkManager[48950]: <info>  [1765100101.3023] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51730 uid=0 result="success"
Dec 07 09:35:01 compute-1 NetworkManager[48950]: <info>  [1765100101.5314] checkpoint[0x5588f5bd4950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Dec 07 09:35:01 compute-1 NetworkManager[48950]: <info>  [1765100101.5318] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51730 uid=0 result="success"
Dec 07 09:35:01 compute-1 sudo[52092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdwgecmttcsghkkotkiaqgediiqrlklm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100101.2674057-846-53107506794367/AnsiballZ_async_status.py'
Dec 07 09:35:01 compute-1 sudo[52092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:35:01 compute-1 NetworkManager[48950]: <info>  [1765100101.9493] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51730 uid=0 result="success"
Dec 07 09:35:01 compute-1 NetworkManager[48950]: <info>  [1765100101.9508] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51730 uid=0 result="success"
Dec 07 09:35:01 compute-1 python3.9[52095]: ansible-ansible.legacy.async_status Invoked with jid=j427436882251.51724 mode=status _async_dir=/root/.ansible_async
Dec 07 09:35:01 compute-1 sudo[52092]: pam_unix(sudo:session): session closed for user root
Dec 07 09:35:02 compute-1 NetworkManager[48950]: <info>  [1765100102.1832] audit: op="networking-control" arg="global-dns-configuration" pid=51730 uid=0 result="success"
Dec 07 09:35:02 compute-1 NetworkManager[48950]: <info>  [1765100102.1858] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Dec 07 09:35:02 compute-1 NetworkManager[48950]: <info>  [1765100102.1888] audit: op="networking-control" arg="global-dns-configuration" pid=51730 uid=0 result="success"
Dec 07 09:35:02 compute-1 NetworkManager[48950]: <info>  [1765100102.1911] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51730 uid=0 result="success"
Dec 07 09:35:02 compute-1 NetworkManager[48950]: <info>  [1765100102.3548] checkpoint[0x5588f5bd4a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Dec 07 09:35:02 compute-1 NetworkManager[48950]: <info>  [1765100102.3553] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51730 uid=0 result="success"
Dec 07 09:35:02 compute-1 ansible-async_wrapper.py[51728]: Module complete (51728)
Dec 07 09:35:02 compute-1 ansible-async_wrapper.py[51727]: Done in kid B.
Dec 07 09:35:05 compute-1 sudo[52198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzmtkwwtmfjhdyyrmqncgvpfnxeqvgqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100101.2674057-846-53107506794367/AnsiballZ_async_status.py'
Dec 07 09:35:05 compute-1 sudo[52198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:35:05 compute-1 python3.9[52200]: ansible-ansible.legacy.async_status Invoked with jid=j427436882251.51724 mode=status _async_dir=/root/.ansible_async
Dec 07 09:35:05 compute-1 sudo[52198]: pam_unix(sudo:session): session closed for user root
Dec 07 09:35:05 compute-1 sudo[52297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvpvmkjdrlwjxbeplcztjlxvohqmojka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100101.2674057-846-53107506794367/AnsiballZ_async_status.py'
Dec 07 09:35:05 compute-1 sudo[52297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:35:06 compute-1 python3.9[52299]: ansible-ansible.legacy.async_status Invoked with jid=j427436882251.51724 mode=cleanup _async_dir=/root/.ansible_async
Dec 07 09:35:06 compute-1 sudo[52297]: pam_unix(sudo:session): session closed for user root
Dec 07 09:35:06 compute-1 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 07 09:35:06 compute-1 sudo[52451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyewkjakwtsvhbezxmbiarskwgyucsaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100106.4280608-927-63372186123571/AnsiballZ_stat.py'
Dec 07 09:35:06 compute-1 sudo[52451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:35:06 compute-1 python3.9[52453]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:35:07 compute-1 sudo[52451]: pam_unix(sudo:session): session closed for user root
Dec 07 09:35:07 compute-1 sudo[52574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shjexbyvpvuckgykxpxbezebvdgzpizp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100106.4280608-927-63372186123571/AnsiballZ_copy.py'
Dec 07 09:35:07 compute-1 sudo[52574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:35:07 compute-1 python3.9[52576]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765100106.4280608-927-63372186123571/.source.returncode _original_basename=.pq0w2vae follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:35:07 compute-1 sudo[52574]: pam_unix(sudo:session): session closed for user root
Dec 07 09:35:08 compute-1 sudo[52726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqetzvnrzfeqvffrhmbetnwgdyjbygzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100107.9021342-975-1402645671079/AnsiballZ_stat.py'
Dec 07 09:35:08 compute-1 sudo[52726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:35:08 compute-1 python3.9[52728]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:35:08 compute-1 sudo[52726]: pam_unix(sudo:session): session closed for user root
Dec 07 09:35:08 compute-1 sudo[52850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpzppsivhofchcptkxuqpytnbkouinld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100107.9021342-975-1402645671079/AnsiballZ_copy.py'
Dec 07 09:35:08 compute-1 sudo[52850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:35:09 compute-1 python3.9[52852]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765100107.9021342-975-1402645671079/.source.cfg _original_basename=.lp97zvus follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:35:09 compute-1 sudo[52850]: pam_unix(sudo:session): session closed for user root
Dec 07 09:35:09 compute-1 sudo[53002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wekttzincmlldzsilcfpiwqnwacdcvbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100109.4459956-1020-162800025929475/AnsiballZ_systemd.py'
Dec 07 09:35:09 compute-1 sudo[53002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:35:10 compute-1 python3.9[53004]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 07 09:35:10 compute-1 systemd[1]: Reloading Network Manager...
Dec 07 09:35:10 compute-1 NetworkManager[48950]: <info>  [1765100110.1810] audit: op="reload" arg="0" pid=53008 uid=0 result="success"
Dec 07 09:35:10 compute-1 NetworkManager[48950]: <info>  [1765100110.1820] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Dec 07 09:35:10 compute-1 systemd[1]: Reloaded Network Manager.
Dec 07 09:35:10 compute-1 sudo[53002]: pam_unix(sudo:session): session closed for user root
Dec 07 09:35:10 compute-1 sshd-session[44957]: Connection closed by 192.168.122.30 port 33242
Dec 07 09:35:10 compute-1 sshd-session[44954]: pam_unix(sshd:session): session closed for user zuul
Dec 07 09:35:10 compute-1 systemd[1]: session-11.scope: Deactivated successfully.
Dec 07 09:35:10 compute-1 systemd[1]: session-11.scope: Consumed 49.103s CPU time.
Dec 07 09:35:10 compute-1 systemd-logind[796]: Session 11 logged out. Waiting for processes to exit.
Dec 07 09:35:10 compute-1 systemd-logind[796]: Removed session 11.
Dec 07 09:35:16 compute-1 sshd-session[53039]: Accepted publickey for zuul from 192.168.122.30 port 60846 ssh2: ECDSA SHA256:Ge4vEI3tJyOAEV3kv3WUGTzBV/uoL+dgWZHkPhbKL64
Dec 07 09:35:16 compute-1 systemd-logind[796]: New session 12 of user zuul.
Dec 07 09:35:16 compute-1 systemd[1]: Started Session 12 of User zuul.
Dec 07 09:35:16 compute-1 sshd-session[53039]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 07 09:35:17 compute-1 python3.9[53192]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 07 09:35:18 compute-1 python3.9[53347]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 07 09:35:19 compute-1 python3.9[53540]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:35:20 compute-1 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 07 09:35:20 compute-1 sshd-session[53042]: Connection closed by 192.168.122.30 port 60846
Dec 07 09:35:20 compute-1 sshd-session[53039]: pam_unix(sshd:session): session closed for user zuul
Dec 07 09:35:20 compute-1 systemd-logind[796]: Session 12 logged out. Waiting for processes to exit.
Dec 07 09:35:20 compute-1 systemd[1]: session-12.scope: Deactivated successfully.
Dec 07 09:35:20 compute-1 systemd[1]: session-12.scope: Consumed 2.473s CPU time.
Dec 07 09:35:20 compute-1 systemd-logind[796]: Removed session 12.
Dec 07 09:35:25 compute-1 sshd-session[53570]: Accepted publickey for zuul from 192.168.122.30 port 51442 ssh2: ECDSA SHA256:Ge4vEI3tJyOAEV3kv3WUGTzBV/uoL+dgWZHkPhbKL64
Dec 07 09:35:25 compute-1 systemd-logind[796]: New session 13 of user zuul.
Dec 07 09:35:25 compute-1 systemd[1]: Started Session 13 of User zuul.
Dec 07 09:35:25 compute-1 sshd-session[53570]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 07 09:35:26 compute-1 python3.9[53723]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 07 09:35:28 compute-1 python3.9[53878]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 07 09:35:28 compute-1 sudo[54032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sufaqulwvcvdxfkbmvigzpgyesdctllw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100128.4301095-81-96048542579546/AnsiballZ_setup.py'
Dec 07 09:35:28 compute-1 sudo[54032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:35:29 compute-1 python3.9[54034]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 07 09:35:29 compute-1 sudo[54032]: pam_unix(sudo:session): session closed for user root
Dec 07 09:35:29 compute-1 sudo[54116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhszpygiiyukjaazeqzgjkmvrtceyupg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100128.4301095-81-96048542579546/AnsiballZ_dnf.py'
Dec 07 09:35:29 compute-1 sudo[54116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:35:29 compute-1 python3.9[54118]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 07 09:35:31 compute-1 sudo[54116]: pam_unix(sudo:session): session closed for user root
Dec 07 09:35:31 compute-1 sudo[54270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmnqcjpxmubofipbrmtbuyzkhmyvvbmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100131.5562563-117-96927505149396/AnsiballZ_setup.py'
Dec 07 09:35:31 compute-1 sudo[54270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:35:32 compute-1 python3.9[54272]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 07 09:35:32 compute-1 sudo[54270]: pam_unix(sudo:session): session closed for user root
Dec 07 09:35:33 compute-1 sudo[54465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkgkuslrabkvatibpbjgamxowqcrbmou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100132.8786812-150-246208732125124/AnsiballZ_file.py'
Dec 07 09:35:33 compute-1 sudo[54465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:35:33 compute-1 python3.9[54467]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:35:33 compute-1 sudo[54465]: pam_unix(sudo:session): session closed for user root
Dec 07 09:35:34 compute-1 sudo[54617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjzcaldlnxiuegddqaqgudmakpbnjrbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100133.790275-174-159337516343420/AnsiballZ_command.py'
Dec 07 09:35:34 compute-1 sudo[54617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:35:34 compute-1 python3.9[54619]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:35:34 compute-1 systemd[1]: var-lib-containers-storage-overlay-compat889324225-merged.mount: Deactivated successfully.
Dec 07 09:35:34 compute-1 systemd[1]: var-lib-containers-storage-overlay-metacopy\x2dcheck3323386573-merged.mount: Deactivated successfully.
Dec 07 09:35:34 compute-1 podman[54620]: 2025-12-07 09:35:34.462113242 +0000 UTC m=+0.048451217 system refresh
Dec 07 09:35:34 compute-1 sudo[54617]: pam_unix(sudo:session): session closed for user root
Dec 07 09:35:35 compute-1 sudo[54780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltfntjeffxidrewpgezwsqnmkesyqldq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100134.7923121-198-96287570916330/AnsiballZ_stat.py'
Dec 07 09:35:35 compute-1 sudo[54780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:35:35 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 07 09:35:35 compute-1 python3.9[54782]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:35:35 compute-1 sudo[54780]: pam_unix(sudo:session): session closed for user root
Dec 07 09:35:36 compute-1 sudo[54903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afwovbfsfkofusmrvncfchymqtjcxyrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100134.7923121-198-96287570916330/AnsiballZ_copy.py'
Dec 07 09:35:36 compute-1 sudo[54903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:35:36 compute-1 python3.9[54905]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765100134.7923121-198-96287570916330/.source.json follow=False _original_basename=podman_network_config.j2 checksum=3480a8dcbef6064d470bfe5c9faeecfd876689e6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:35:36 compute-1 sudo[54903]: pam_unix(sudo:session): session closed for user root
Dec 07 09:35:36 compute-1 sudo[55055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eloippygokicqhijvawextnqpkkfrtau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100136.5170412-243-43648687762703/AnsiballZ_stat.py'
Dec 07 09:35:36 compute-1 sudo[55055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:35:36 compute-1 python3.9[55057]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:35:36 compute-1 sudo[55055]: pam_unix(sudo:session): session closed for user root
Dec 07 09:35:37 compute-1 sudo[55178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erjjlbtivyflscbskscebztnmcnhbfki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100136.5170412-243-43648687762703/AnsiballZ_copy.py'
Dec 07 09:35:37 compute-1 sudo[55178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:35:37 compute-1 python3.9[55180]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765100136.5170412-243-43648687762703/.source.conf follow=False _original_basename=registries.conf.j2 checksum=804a0d01b832e60d20f779a331306df708c87b02 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:35:37 compute-1 sudo[55178]: pam_unix(sudo:session): session closed for user root
Dec 07 09:35:38 compute-1 sudo[55330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbvcgivagpkphbqlqubbaqflqwwxbuyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100137.8125818-291-170753421984652/AnsiballZ_ini_file.py'
Dec 07 09:35:38 compute-1 sudo[55330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:35:38 compute-1 python3.9[55332]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:35:38 compute-1 sudo[55330]: pam_unix(sudo:session): session closed for user root
Dec 07 09:35:38 compute-1 sudo[55482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uahzxlpplsgixmusyrjkpdvbgxnmrfvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100138.5465095-291-33705561382123/AnsiballZ_ini_file.py'
Dec 07 09:35:38 compute-1 sudo[55482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:35:39 compute-1 python3.9[55484]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:35:39 compute-1 sudo[55482]: pam_unix(sudo:session): session closed for user root
Dec 07 09:35:39 compute-1 sudo[55634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-joemieiatbdiwrxdcifhmuapdjqepdtm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100139.1992724-291-147704139893192/AnsiballZ_ini_file.py'
Dec 07 09:35:39 compute-1 sudo[55634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:35:39 compute-1 python3.9[55636]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:35:39 compute-1 sudo[55634]: pam_unix(sudo:session): session closed for user root
Dec 07 09:35:40 compute-1 sudo[55786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tinsieiobyalrlhkkcakrlttpvkwxgyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100139.9060414-291-132880238402149/AnsiballZ_ini_file.py'
Dec 07 09:35:40 compute-1 sudo[55786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:35:40 compute-1 python3.9[55788]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:35:40 compute-1 sudo[55786]: pam_unix(sudo:session): session closed for user root
Dec 07 09:35:40 compute-1 sudo[55938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqephhkmlfjrmzmqsitqknaxughqonoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100140.6666455-384-38386448191715/AnsiballZ_dnf.py'
Dec 07 09:35:40 compute-1 sudo[55938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:35:41 compute-1 python3.9[55940]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 07 09:35:42 compute-1 sudo[55938]: pam_unix(sudo:session): session closed for user root
Dec 07 09:35:43 compute-1 sudo[56091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obpbvmwufgomclyjiteifxgwnuawtgks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100143.2364762-417-243918004448482/AnsiballZ_setup.py'
Dec 07 09:35:43 compute-1 sudo[56091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:35:43 compute-1 python3.9[56093]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 07 09:35:43 compute-1 sudo[56091]: pam_unix(sudo:session): session closed for user root
Dec 07 09:35:44 compute-1 sudo[56245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmbnoewpzqmkqndljqhsixpfmezgipkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100144.1010149-441-218563647796894/AnsiballZ_stat.py'
Dec 07 09:35:44 compute-1 sudo[56245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:35:44 compute-1 python3.9[56247]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 07 09:35:44 compute-1 sudo[56245]: pam_unix(sudo:session): session closed for user root
Dec 07 09:35:45 compute-1 sudo[56397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxnjargplpaqmkdcuidsxronjamysqem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100144.9292324-468-134098862161985/AnsiballZ_stat.py'
Dec 07 09:35:45 compute-1 sudo[56397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:35:45 compute-1 python3.9[56399]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 07 09:35:45 compute-1 sudo[56397]: pam_unix(sudo:session): session closed for user root
Dec 07 09:35:46 compute-1 sudo[56549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hoactwqzvlvdyniacbupvbzdyjwewabj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100145.8378587-498-206907415878161/AnsiballZ_command.py'
Dec 07 09:35:46 compute-1 sudo[56549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:35:46 compute-1 python3.9[56551]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:35:46 compute-1 sudo[56549]: pam_unix(sudo:session): session closed for user root
Dec 07 09:35:47 compute-1 sudo[56702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jynogxfwkpmcbgpenvnnoxdvfdzyythl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100147.148846-528-234438185853567/AnsiballZ_service_facts.py'
Dec 07 09:35:47 compute-1 sudo[56702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:35:47 compute-1 python3.9[56704]: ansible-service_facts Invoked
Dec 07 09:35:47 compute-1 network[56721]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 07 09:35:47 compute-1 network[56722]: 'network-scripts' will be removed from distribution in near future.
Dec 07 09:35:47 compute-1 network[56723]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 07 09:35:51 compute-1 sudo[56702]: pam_unix(sudo:session): session closed for user root
Dec 07 09:35:52 compute-1 sudo[57006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkgexqsysbhmxkqpvcbxgbjcwzozfbat ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1765100152.576394-573-147013022013164/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1765100152.576394-573-147013022013164/args'
Dec 07 09:35:52 compute-1 sudo[57006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:35:52 compute-1 sudo[57006]: pam_unix(sudo:session): session closed for user root
Dec 07 09:35:53 compute-1 sudo[57173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwnvxlonxybhxnjatflljzseozbrihro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100153.406999-606-98884983872975/AnsiballZ_dnf.py'
Dec 07 09:35:53 compute-1 sudo[57173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:35:53 compute-1 python3.9[57175]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 07 09:35:55 compute-1 sudo[57173]: pam_unix(sudo:session): session closed for user root
Dec 07 09:35:56 compute-1 sudo[57326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hynpklouuktufrbdwuhgdmfepfvbghjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100155.8270733-646-174768175736933/AnsiballZ_package_facts.py'
Dec 07 09:35:56 compute-1 sudo[57326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:35:56 compute-1 python3.9[57328]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec 07 09:35:56 compute-1 sudo[57326]: pam_unix(sudo:session): session closed for user root
Dec 07 09:35:57 compute-1 sudo[57478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuylvzgeqmhrxempqfecciakupjsiupm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100157.5645988-676-19536698760537/AnsiballZ_stat.py'
Dec 07 09:35:57 compute-1 sudo[57478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:35:58 compute-1 python3.9[57480]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:35:58 compute-1 sudo[57478]: pam_unix(sudo:session): session closed for user root
Dec 07 09:35:58 compute-1 sudo[57603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsqyfrliffmmexzbykdiejsnhkgswqtm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100157.5645988-676-19536698760537/AnsiballZ_copy.py'
Dec 07 09:35:58 compute-1 sudo[57603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:35:58 compute-1 python3.9[57605]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765100157.5645988-676-19536698760537/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:35:58 compute-1 sudo[57603]: pam_unix(sudo:session): session closed for user root
Dec 07 09:35:59 compute-1 sudo[57757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jycbikiuwxejaifhaylexdrhatzywoup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100159.133762-720-33482396040624/AnsiballZ_stat.py'
Dec 07 09:35:59 compute-1 sudo[57757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:35:59 compute-1 python3.9[57759]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:35:59 compute-1 sudo[57757]: pam_unix(sudo:session): session closed for user root
Dec 07 09:36:00 compute-1 sudo[57882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-equlyaixrmlcreqbonotzmaqqumrzaog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100159.133762-720-33482396040624/AnsiballZ_copy.py'
Dec 07 09:36:00 compute-1 sudo[57882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:36:00 compute-1 python3.9[57884]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765100159.133762-720-33482396040624/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:36:00 compute-1 sudo[57882]: pam_unix(sudo:session): session closed for user root
Dec 07 09:36:01 compute-1 anacron[4155]: Job `cron.weekly' started
Dec 07 09:36:01 compute-1 anacron[4155]: Job `cron.weekly' terminated
Dec 07 09:36:01 compute-1 sudo[58038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kftfmkfymdaccjlwagmmgfddqgtyugzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100161.442039-784-221827244714654/AnsiballZ_lineinfile.py'
Dec 07 09:36:01 compute-1 sudo[58038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:36:02 compute-1 python3.9[58040]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:36:02 compute-1 sudo[58038]: pam_unix(sudo:session): session closed for user root
Dec 07 09:36:03 compute-1 sudo[58192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcgpzqlxwllwjbqqywiyieporicguknz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100163.15153-829-37642974188966/AnsiballZ_setup.py'
Dec 07 09:36:03 compute-1 sudo[58192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:36:03 compute-1 python3.9[58194]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 07 09:36:04 compute-1 sudo[58192]: pam_unix(sudo:session): session closed for user root
Dec 07 09:36:04 compute-1 sudo[58276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovtbbthppbjxzqvfsajccfohoquotttk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100163.15153-829-37642974188966/AnsiballZ_systemd.py'
Dec 07 09:36:04 compute-1 sudo[58276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:36:04 compute-1 python3.9[58278]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 07 09:36:04 compute-1 sudo[58276]: pam_unix(sudo:session): session closed for user root
Dec 07 09:36:05 compute-1 sudo[58430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aedpotdzbvsguqyrimxwjnuzodaqcjqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100165.5514696-876-196408844601114/AnsiballZ_setup.py'
Dec 07 09:36:05 compute-1 sudo[58430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:36:06 compute-1 python3.9[58432]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 07 09:36:06 compute-1 sudo[58430]: pam_unix(sudo:session): session closed for user root
Dec 07 09:36:06 compute-1 sudo[58514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jechsnvgmmqdumrhligqvudchswwdiww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100165.5514696-876-196408844601114/AnsiballZ_systemd.py'
Dec 07 09:36:06 compute-1 sudo[58514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:36:06 compute-1 python3.9[58516]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 07 09:36:06 compute-1 chronyd[789]: chronyd exiting
Dec 07 09:36:06 compute-1 systemd[1]: Stopping NTP client/server...
Dec 07 09:36:06 compute-1 systemd[1]: chronyd.service: Deactivated successfully.
Dec 07 09:36:06 compute-1 systemd[1]: Stopped NTP client/server.
Dec 07 09:36:07 compute-1 systemd[1]: Starting NTP client/server...
Dec 07 09:36:07 compute-1 chronyd[58524]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec 07 09:36:07 compute-1 chronyd[58524]: Frequency -26.599 +/- 0.064 ppm read from /var/lib/chrony/drift
Dec 07 09:36:07 compute-1 chronyd[58524]: Loaded seccomp filter (level 2)
Dec 07 09:36:07 compute-1 systemd[1]: Started NTP client/server.
Dec 07 09:36:07 compute-1 sudo[58514]: pam_unix(sudo:session): session closed for user root
Dec 07 09:36:07 compute-1 sshd-session[53573]: Connection closed by 192.168.122.30 port 51442
Dec 07 09:36:07 compute-1 sshd-session[53570]: pam_unix(sshd:session): session closed for user zuul
Dec 07 09:36:07 compute-1 systemd[1]: session-13.scope: Deactivated successfully.
Dec 07 09:36:07 compute-1 systemd[1]: session-13.scope: Consumed 26.348s CPU time.
Dec 07 09:36:07 compute-1 systemd-logind[796]: Session 13 logged out. Waiting for processes to exit.
Dec 07 09:36:07 compute-1 systemd-logind[796]: Removed session 13.
Dec 07 09:36:13 compute-1 sshd-session[58550]: Accepted publickey for zuul from 192.168.122.30 port 51272 ssh2: ECDSA SHA256:Ge4vEI3tJyOAEV3kv3WUGTzBV/uoL+dgWZHkPhbKL64
Dec 07 09:36:13 compute-1 systemd-logind[796]: New session 14 of user zuul.
Dec 07 09:36:13 compute-1 systemd[1]: Started Session 14 of User zuul.
Dec 07 09:36:13 compute-1 sshd-session[58550]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 07 09:36:14 compute-1 sudo[58703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-feknasbatqlwchfjenctzltxpcmqfigj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100173.7664785-27-145396303234473/AnsiballZ_file.py'
Dec 07 09:36:14 compute-1 sudo[58703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:36:14 compute-1 python3.9[58705]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:36:14 compute-1 sudo[58703]: pam_unix(sudo:session): session closed for user root
Dec 07 09:36:15 compute-1 sudo[58855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knlioctuuxmudavgrhedmmsgkzbamhzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100175.0209231-63-94145566695439/AnsiballZ_stat.py'
Dec 07 09:36:15 compute-1 sudo[58855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:36:15 compute-1 python3.9[58857]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:36:15 compute-1 sudo[58855]: pam_unix(sudo:session): session closed for user root
Dec 07 09:36:16 compute-1 sudo[58978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydydnrkfpxyocxcffdkziznmrjbkkzbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100175.0209231-63-94145566695439/AnsiballZ_copy.py'
Dec 07 09:36:16 compute-1 sudo[58978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:36:16 compute-1 python3.9[58980]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765100175.0209231-63-94145566695439/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:36:16 compute-1 sudo[58978]: pam_unix(sudo:session): session closed for user root
Dec 07 09:36:16 compute-1 sshd-session[58553]: Connection closed by 192.168.122.30 port 51272
Dec 07 09:36:16 compute-1 sshd-session[58550]: pam_unix(sshd:session): session closed for user zuul
Dec 07 09:36:16 compute-1 systemd[1]: session-14.scope: Deactivated successfully.
Dec 07 09:36:16 compute-1 systemd[1]: session-14.scope: Consumed 1.942s CPU time.
Dec 07 09:36:16 compute-1 systemd-logind[796]: Session 14 logged out. Waiting for processes to exit.
Dec 07 09:36:16 compute-1 systemd-logind[796]: Removed session 14.
Dec 07 09:36:22 compute-1 sshd-session[59005]: Accepted publickey for zuul from 192.168.122.30 port 45698 ssh2: ECDSA SHA256:Ge4vEI3tJyOAEV3kv3WUGTzBV/uoL+dgWZHkPhbKL64
Dec 07 09:36:22 compute-1 systemd-logind[796]: New session 15 of user zuul.
Dec 07 09:36:22 compute-1 systemd[1]: Started Session 15 of User zuul.
Dec 07 09:36:22 compute-1 sshd-session[59005]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 07 09:36:23 compute-1 python3.9[59158]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 07 09:36:24 compute-1 sudo[59312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfmbplminfavosvzyfbagjrgoaelpfrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100184.146712-60-49118895525377/AnsiballZ_file.py'
Dec 07 09:36:24 compute-1 sudo[59312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:36:24 compute-1 python3.9[59314]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:36:24 compute-1 sudo[59312]: pam_unix(sudo:session): session closed for user root
Dec 07 09:36:25 compute-1 sudo[59487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-papmzlavokbpzmlxpnucwmtrbrraixfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100184.9802012-84-275172828739264/AnsiballZ_stat.py'
Dec 07 09:36:25 compute-1 sudo[59487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:36:25 compute-1 python3.9[59489]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:36:25 compute-1 sudo[59487]: pam_unix(sudo:session): session closed for user root
Dec 07 09:36:26 compute-1 sudo[59610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tolhttkaxnotnazmjygkxfisytexlabn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100184.9802012-84-275172828739264/AnsiballZ_copy.py'
Dec 07 09:36:26 compute-1 sudo[59610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:36:26 compute-1 python3.9[59612]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1765100184.9802012-84-275172828739264/.source.json _original_basename=.wzrnmxiq follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:36:26 compute-1 sudo[59610]: pam_unix(sudo:session): session closed for user root
Dec 07 09:36:27 compute-1 sudo[59762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssmnzvarybfawvkxzackoskvhhebfroq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100187.0994284-153-267727701478538/AnsiballZ_stat.py'
Dec 07 09:36:27 compute-1 sudo[59762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:36:27 compute-1 python3.9[59764]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:36:27 compute-1 sudo[59762]: pam_unix(sudo:session): session closed for user root
Dec 07 09:36:27 compute-1 sudo[59885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twmfrpvdlqlkyaqwbvitzoodpbicyfow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100187.0994284-153-267727701478538/AnsiballZ_copy.py'
Dec 07 09:36:27 compute-1 sudo[59885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:36:28 compute-1 python3.9[59887]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765100187.0994284-153-267727701478538/.source _original_basename=.3i4tokqp follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:36:28 compute-1 sudo[59885]: pam_unix(sudo:session): session closed for user root
Dec 07 09:36:28 compute-1 sudo[60037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-curwlfodfjmbvxzazhacukjeiuzfbzjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100188.5441163-201-208772684056712/AnsiballZ_file.py'
Dec 07 09:36:28 compute-1 sudo[60037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:36:29 compute-1 python3.9[60039]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:36:29 compute-1 sudo[60037]: pam_unix(sudo:session): session closed for user root
Dec 07 09:36:29 compute-1 sudo[60189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysjdnewxtjullcavjbassaylmkbdowsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100189.3249245-225-181757883359463/AnsiballZ_stat.py'
Dec 07 09:36:29 compute-1 sudo[60189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:36:29 compute-1 python3.9[60191]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:36:29 compute-1 sudo[60189]: pam_unix(sudo:session): session closed for user root
Dec 07 09:36:30 compute-1 sudo[60312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djtwondatzauavqevtqiwxatxgfkcqsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100189.3249245-225-181757883359463/AnsiballZ_copy.py'
Dec 07 09:36:30 compute-1 sudo[60312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:36:30 compute-1 python3.9[60314]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765100189.3249245-225-181757883359463/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:36:30 compute-1 sudo[60312]: pam_unix(sudo:session): session closed for user root
Dec 07 09:36:30 compute-1 sudo[60464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emdhirirmkqwkmxzrodpynhwjehwnary ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100190.6729474-225-153992694386349/AnsiballZ_stat.py'
Dec 07 09:36:30 compute-1 sudo[60464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:36:31 compute-1 python3.9[60466]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:36:31 compute-1 sudo[60464]: pam_unix(sudo:session): session closed for user root
Dec 07 09:36:31 compute-1 sudo[60587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edyaukywpbgilcautfpfqalxtcbgsmmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100190.6729474-225-153992694386349/AnsiballZ_copy.py'
Dec 07 09:36:31 compute-1 sudo[60587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:36:31 compute-1 python3.9[60589]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765100190.6729474-225-153992694386349/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:36:31 compute-1 sudo[60587]: pam_unix(sudo:session): session closed for user root
Dec 07 09:36:32 compute-1 sudo[60739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvdxvcbfgngvszpzcshgptcexoruiaei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100192.0924196-312-117057201899381/AnsiballZ_file.py'
Dec 07 09:36:32 compute-1 sudo[60739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:36:32 compute-1 python3.9[60741]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:36:32 compute-1 sudo[60739]: pam_unix(sudo:session): session closed for user root
Dec 07 09:36:33 compute-1 sudo[60891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-humbyxpkobkqgjftreztdmbtdlcgeglv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100192.8592043-336-180869313313908/AnsiballZ_stat.py'
Dec 07 09:36:33 compute-1 sudo[60891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:36:33 compute-1 python3.9[60893]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:36:33 compute-1 sudo[60891]: pam_unix(sudo:session): session closed for user root
Dec 07 09:36:33 compute-1 sudo[61014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arfamxpzkbdqpeqreqsvlqrhoixotrzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100192.8592043-336-180869313313908/AnsiballZ_copy.py'
Dec 07 09:36:33 compute-1 sudo[61014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:36:33 compute-1 python3.9[61016]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765100192.8592043-336-180869313313908/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:36:33 compute-1 sudo[61014]: pam_unix(sudo:session): session closed for user root
Dec 07 09:36:34 compute-1 sudo[61166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igczppzvkrnriwfoxsrxnyvvothrzizf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100194.397525-381-138450279223200/AnsiballZ_stat.py'
Dec 07 09:36:34 compute-1 sudo[61166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:36:34 compute-1 python3.9[61168]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:36:34 compute-1 sudo[61166]: pam_unix(sudo:session): session closed for user root
Dec 07 09:36:35 compute-1 sudo[61289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpzgoudehjjuhovptrnrcioyidzqueqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100194.397525-381-138450279223200/AnsiballZ_copy.py'
Dec 07 09:36:35 compute-1 sudo[61289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:36:35 compute-1 python3.9[61291]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765100194.397525-381-138450279223200/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:36:35 compute-1 sudo[61289]: pam_unix(sudo:session): session closed for user root
Dec 07 09:36:36 compute-1 sudo[61441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khiaifunpcedryspggessszldjaczhto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100195.7932043-426-78840399043795/AnsiballZ_systemd.py'
Dec 07 09:36:36 compute-1 sudo[61441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:36:36 compute-1 python3.9[61443]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 07 09:36:36 compute-1 systemd[1]: Reloading.
Dec 07 09:36:36 compute-1 systemd-rc-local-generator[61472]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:36:36 compute-1 systemd-sysv-generator[61475]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:36:37 compute-1 systemd[1]: Reloading.
Dec 07 09:36:37 compute-1 systemd-rc-local-generator[61510]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:36:37 compute-1 systemd-sysv-generator[61515]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:36:37 compute-1 systemd[1]: Starting EDPM Container Shutdown...
Dec 07 09:36:37 compute-1 systemd[1]: Finished EDPM Container Shutdown.
Dec 07 09:36:37 compute-1 sudo[61441]: pam_unix(sudo:session): session closed for user root
Dec 07 09:36:38 compute-1 sudo[61670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vilhnthjmchcrfjzlugkttzwdckpbkmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100197.754945-450-280595573473998/AnsiballZ_stat.py'
Dec 07 09:36:38 compute-1 sudo[61670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:36:38 compute-1 python3.9[61672]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:36:38 compute-1 sudo[61670]: pam_unix(sudo:session): session closed for user root
Dec 07 09:36:38 compute-1 sudo[61793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cncnnkvcgqcqhoaefjnxgknpfdoikxjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100197.754945-450-280595573473998/AnsiballZ_copy.py'
Dec 07 09:36:38 compute-1 sudo[61793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:36:38 compute-1 python3.9[61795]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765100197.754945-450-280595573473998/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:36:38 compute-1 sudo[61793]: pam_unix(sudo:session): session closed for user root
Dec 07 09:36:39 compute-1 sudo[61945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncthtcxmkqqggvewabappeisffopgtwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100199.2102065-495-273019530785551/AnsiballZ_stat.py'
Dec 07 09:36:39 compute-1 sudo[61945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:36:39 compute-1 python3.9[61947]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:36:39 compute-1 sudo[61945]: pam_unix(sudo:session): session closed for user root
Dec 07 09:36:40 compute-1 sudo[62068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sikogifidbpsrdxfsiltrsmrfhavzadr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100199.2102065-495-273019530785551/AnsiballZ_copy.py'
Dec 07 09:36:40 compute-1 sudo[62068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:36:40 compute-1 python3.9[62070]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765100199.2102065-495-273019530785551/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:36:40 compute-1 sudo[62068]: pam_unix(sudo:session): session closed for user root
Dec 07 09:36:41 compute-1 sudo[62220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycichtiqflsakddwdiommlojylciooch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100200.6499064-540-249316445255551/AnsiballZ_systemd.py'
Dec 07 09:36:41 compute-1 sudo[62220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:36:41 compute-1 python3.9[62222]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 07 09:36:41 compute-1 systemd[1]: Reloading.
Dec 07 09:36:41 compute-1 systemd-sysv-generator[62256]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:36:41 compute-1 systemd-rc-local-generator[62252]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:36:41 compute-1 systemd[1]: Reloading.
Dec 07 09:36:41 compute-1 systemd-rc-local-generator[62288]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:36:41 compute-1 systemd-sysv-generator[62292]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:36:41 compute-1 systemd[1]: Starting Create netns directory...
Dec 07 09:36:41 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 07 09:36:41 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 07 09:36:41 compute-1 systemd[1]: Finished Create netns directory.
Dec 07 09:36:41 compute-1 sudo[62220]: pam_unix(sudo:session): session closed for user root
Dec 07 09:36:42 compute-1 python3.9[62449]: ansible-ansible.builtin.service_facts Invoked
Dec 07 09:36:42 compute-1 network[62466]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 07 09:36:42 compute-1 network[62467]: 'network-scripts' will be removed from distribution in near future.
Dec 07 09:36:42 compute-1 network[62468]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 07 09:36:49 compute-1 sudo[62728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkkciazxyurmofjttpmxwzplhfsegnnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100209.0155063-588-132243094998365/AnsiballZ_systemd.py'
Dec 07 09:36:49 compute-1 sudo[62728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:36:49 compute-1 python3.9[62730]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 07 09:36:49 compute-1 systemd[1]: Reloading.
Dec 07 09:36:49 compute-1 systemd-rc-local-generator[62761]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:36:49 compute-1 systemd-sysv-generator[62765]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:36:49 compute-1 systemd[1]: Stopping IPv4 firewall with iptables...
Dec 07 09:36:50 compute-1 iptables.init[62770]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Dec 07 09:36:50 compute-1 iptables.init[62770]: iptables: Flushing firewall rules: [  OK  ]
Dec 07 09:36:50 compute-1 systemd[1]: iptables.service: Deactivated successfully.
Dec 07 09:36:50 compute-1 systemd[1]: Stopped IPv4 firewall with iptables.
Dec 07 09:36:50 compute-1 sudo[62728]: pam_unix(sudo:session): session closed for user root
Dec 07 09:36:50 compute-1 sudo[62964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anzjpipuakfhernivagqvgpicmdbwple ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100210.5431232-588-67514756366791/AnsiballZ_systemd.py'
Dec 07 09:36:50 compute-1 sudo[62964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:36:51 compute-1 python3.9[62966]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 07 09:36:51 compute-1 sudo[62964]: pam_unix(sudo:session): session closed for user root
Dec 07 09:36:51 compute-1 sudo[63118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikbbwfrviilpjqupupscpkedwihyrysg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100211.6802695-636-26800731318598/AnsiballZ_systemd.py'
Dec 07 09:36:51 compute-1 sudo[63118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:36:52 compute-1 python3.9[63120]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 07 09:36:52 compute-1 systemd[1]: Reloading.
Dec 07 09:36:52 compute-1 systemd-rc-local-generator[63150]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:36:52 compute-1 systemd-sysv-generator[63154]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:36:52 compute-1 systemd[1]: Starting Netfilter Tables...
Dec 07 09:36:52 compute-1 systemd[1]: Finished Netfilter Tables.
Dec 07 09:36:52 compute-1 sudo[63118]: pam_unix(sudo:session): session closed for user root
Dec 07 09:36:53 compute-1 sudo[63310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ougjbpqomsjkbxajdvkfxfthaormtjvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100213.0192642-660-195639243067006/AnsiballZ_command.py'
Dec 07 09:36:53 compute-1 sudo[63310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:36:53 compute-1 python3.9[63312]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:36:53 compute-1 sudo[63310]: pam_unix(sudo:session): session closed for user root
Dec 07 09:36:54 compute-1 sudo[63463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xifwpztqqegygrslumynubcyuspfwddi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100214.2410727-702-257409820555392/AnsiballZ_stat.py'
Dec 07 09:36:54 compute-1 sudo[63463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:36:54 compute-1 python3.9[63465]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:36:54 compute-1 sudo[63463]: pam_unix(sudo:session): session closed for user root
Dec 07 09:36:55 compute-1 sudo[63588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kshnsrlxliwjeuweqdxrixfdrknggqtf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100214.2410727-702-257409820555392/AnsiballZ_copy.py'
Dec 07 09:36:55 compute-1 sudo[63588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:36:55 compute-1 python3.9[63590]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765100214.2410727-702-257409820555392/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:36:55 compute-1 sudo[63588]: pam_unix(sudo:session): session closed for user root
Dec 07 09:36:55 compute-1 sudo[63741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vczzeidqdpydedzkoswoqhgwainiduhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100215.6370683-747-183599619773134/AnsiballZ_systemd.py'
Dec 07 09:36:55 compute-1 sudo[63741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:36:56 compute-1 python3.9[63743]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 07 09:36:56 compute-1 systemd[1]: Reloading OpenSSH server daemon...
Dec 07 09:36:56 compute-1 sshd[1007]: Received SIGHUP; restarting.
Dec 07 09:36:56 compute-1 sshd[1007]: Server listening on 0.0.0.0 port 22.
Dec 07 09:36:56 compute-1 sshd[1007]: Server listening on :: port 22.
Dec 07 09:36:56 compute-1 systemd[1]: Reloaded OpenSSH server daemon.
Dec 07 09:36:56 compute-1 sudo[63741]: pam_unix(sudo:session): session closed for user root
Dec 07 09:36:56 compute-1 sudo[63897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xycgokpedmxwibsilzkehrdxdszbfvpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100216.5970352-771-119423720454628/AnsiballZ_file.py'
Dec 07 09:36:56 compute-1 sudo[63897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:36:57 compute-1 python3.9[63899]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:36:57 compute-1 sudo[63897]: pam_unix(sudo:session): session closed for user root
Dec 07 09:36:57 compute-1 sudo[64049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrtuziwpwaquxdbjncbcmdtafosqisyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100217.346263-795-186213945448782/AnsiballZ_stat.py'
Dec 07 09:36:57 compute-1 sudo[64049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:36:57 compute-1 python3.9[64051]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:36:57 compute-1 sudo[64049]: pam_unix(sudo:session): session closed for user root
Dec 07 09:36:58 compute-1 sudo[64172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aigvohkdyuoccvabqjhrklvyyrzqcioc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100217.346263-795-186213945448782/AnsiballZ_copy.py'
Dec 07 09:36:58 compute-1 sudo[64172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:36:58 compute-1 python3.9[64174]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765100217.346263-795-186213945448782/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:36:58 compute-1 sudo[64172]: pam_unix(sudo:session): session closed for user root
Dec 07 09:36:59 compute-1 sudo[64324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqypxbgjtrrprbuxxgwblkwgbgzxgssn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100219.1902516-849-201349044659721/AnsiballZ_timezone.py'
Dec 07 09:36:59 compute-1 sudo[64324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:36:59 compute-1 python3.9[64326]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 07 09:36:59 compute-1 systemd[1]: Starting Time & Date Service...
Dec 07 09:37:00 compute-1 systemd[1]: Started Time & Date Service.
Dec 07 09:37:00 compute-1 sudo[64324]: pam_unix(sudo:session): session closed for user root
Dec 07 09:37:00 compute-1 sudo[64480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsaoayybnvmqaoaaslcrguislycyujae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100220.365056-876-227736140766430/AnsiballZ_file.py'
Dec 07 09:37:00 compute-1 sudo[64480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:37:00 compute-1 python3.9[64482]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:37:00 compute-1 sudo[64480]: pam_unix(sudo:session): session closed for user root
Dec 07 09:37:01 compute-1 sudo[64632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsexnlbwlujwgqrgazkwmuojrzksyzfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100221.1655283-900-400373892121/AnsiballZ_stat.py'
Dec 07 09:37:01 compute-1 sudo[64632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:37:01 compute-1 python3.9[64634]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:37:01 compute-1 sudo[64632]: pam_unix(sudo:session): session closed for user root
Dec 07 09:37:02 compute-1 sudo[64755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyoivqtuwisonriauuvqtzckiozjolbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100221.1655283-900-400373892121/AnsiballZ_copy.py'
Dec 07 09:37:02 compute-1 sudo[64755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:37:02 compute-1 python3.9[64757]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765100221.1655283-900-400373892121/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:37:02 compute-1 sudo[64755]: pam_unix(sudo:session): session closed for user root
Dec 07 09:37:03 compute-1 sudo[64907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrsbjkdjnizyhvdotqlaxclbttxfmxuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100222.8006008-945-39609819808384/AnsiballZ_stat.py'
Dec 07 09:37:03 compute-1 sudo[64907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:37:03 compute-1 python3.9[64909]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:37:03 compute-1 sudo[64907]: pam_unix(sudo:session): session closed for user root
Dec 07 09:37:03 compute-1 sudo[65030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejsgwcivvallnhlzddltffhwdnxdfhqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100222.8006008-945-39609819808384/AnsiballZ_copy.py'
Dec 07 09:37:03 compute-1 sudo[65030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:37:04 compute-1 python3.9[65032]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765100222.8006008-945-39609819808384/.source.yaml _original_basename=.sdq3ye_m follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:37:04 compute-1 sudo[65030]: pam_unix(sudo:session): session closed for user root
Dec 07 09:37:04 compute-1 sudo[65182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewaqnaxgljxxbualasbcmoggpepbgunc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100224.3060765-990-164267017235878/AnsiballZ_stat.py'
Dec 07 09:37:04 compute-1 sudo[65182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:37:04 compute-1 python3.9[65184]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:37:04 compute-1 sudo[65182]: pam_unix(sudo:session): session closed for user root
Dec 07 09:37:05 compute-1 sudo[65305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nllojqlvimjriociaqazkjfdcfcdvipg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100224.3060765-990-164267017235878/AnsiballZ_copy.py'
Dec 07 09:37:05 compute-1 sudo[65305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:37:05 compute-1 python3.9[65307]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765100224.3060765-990-164267017235878/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:37:05 compute-1 sudo[65305]: pam_unix(sudo:session): session closed for user root
Dec 07 09:37:06 compute-1 sudo[65457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqltmseqfonowqtswvbjcwriuukmhwkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100225.7307355-1035-213122322529988/AnsiballZ_command.py'
Dec 07 09:37:06 compute-1 sudo[65457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:37:06 compute-1 python3.9[65459]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:37:06 compute-1 sudo[65457]: pam_unix(sudo:session): session closed for user root
Dec 07 09:37:06 compute-1 sudo[65610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylkuwkbxhqxkoccjyfdgrqngxrsunusc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100226.5745258-1059-188940344744614/AnsiballZ_command.py'
Dec 07 09:37:06 compute-1 sudo[65610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:37:06 compute-1 python3.9[65612]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:37:07 compute-1 sudo[65610]: pam_unix(sudo:session): session closed for user root
Dec 07 09:37:07 compute-1 sudo[65763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-toafpiuecqxzkmjosdmgzkogdrpnfqmb ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765100227.4257967-1083-130603161809973/AnsiballZ_edpm_nftables_from_files.py'
Dec 07 09:37:07 compute-1 sudo[65763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:37:08 compute-1 python3[65765]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 07 09:37:08 compute-1 sudo[65763]: pam_unix(sudo:session): session closed for user root
Dec 07 09:37:08 compute-1 sudo[65915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvrkdubiwpuykxsgkvpnjibehzgeokpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100228.357136-1107-15803205951033/AnsiballZ_stat.py'
Dec 07 09:37:08 compute-1 sudo[65915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:37:08 compute-1 python3.9[65917]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:37:09 compute-1 sudo[65915]: pam_unix(sudo:session): session closed for user root
Dec 07 09:37:09 compute-1 sudo[66038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybpaxfflzchnkjiwgqvzdjjpfwjroddj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100228.357136-1107-15803205951033/AnsiballZ_copy.py'
Dec 07 09:37:09 compute-1 sudo[66038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:37:09 compute-1 python3.9[66040]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765100228.357136-1107-15803205951033/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:37:09 compute-1 sudo[66038]: pam_unix(sudo:session): session closed for user root
Dec 07 09:37:10 compute-1 sudo[66190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pumluiyikgwjrohktdxgmhyaawtxbijz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100229.993188-1152-124201395862198/AnsiballZ_stat.py'
Dec 07 09:37:10 compute-1 sudo[66190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:37:10 compute-1 python3.9[66192]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:37:10 compute-1 sudo[66190]: pam_unix(sudo:session): session closed for user root
Dec 07 09:37:11 compute-1 sudo[66313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itrwxsmkhmgluevjdszbpauclbtonzao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100229.993188-1152-124201395862198/AnsiballZ_copy.py'
Dec 07 09:37:11 compute-1 sudo[66313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:37:11 compute-1 python3.9[66315]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765100229.993188-1152-124201395862198/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:37:11 compute-1 sudo[66313]: pam_unix(sudo:session): session closed for user root
Dec 07 09:37:11 compute-1 sudo[66465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upgvvgafaphgmzagxcccbcalgbyvjwjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100231.5108488-1197-106755655542594/AnsiballZ_stat.py'
Dec 07 09:37:11 compute-1 sudo[66465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:37:12 compute-1 python3.9[66467]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:37:12 compute-1 sudo[66465]: pam_unix(sudo:session): session closed for user root
Dec 07 09:37:12 compute-1 sudo[66588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jouiuntvqbwptswpbzbonefseonbtsnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100231.5108488-1197-106755655542594/AnsiballZ_copy.py'
Dec 07 09:37:12 compute-1 sudo[66588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:37:12 compute-1 python3.9[66590]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765100231.5108488-1197-106755655542594/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:37:12 compute-1 sudo[66588]: pam_unix(sudo:session): session closed for user root
Dec 07 09:37:13 compute-1 sudo[66740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tblhtayossltsjjptntncymfrfjpzsdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100232.946893-1242-143692168448086/AnsiballZ_stat.py'
Dec 07 09:37:13 compute-1 sudo[66740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:37:13 compute-1 python3.9[66742]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:37:13 compute-1 sudo[66740]: pam_unix(sudo:session): session closed for user root
Dec 07 09:37:13 compute-1 sudo[66863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unanloerlmvovkutmrrdtdjqmlbptikn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100232.946893-1242-143692168448086/AnsiballZ_copy.py'
Dec 07 09:37:13 compute-1 sudo[66863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:37:14 compute-1 python3.9[66865]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765100232.946893-1242-143692168448086/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:37:14 compute-1 sudo[66863]: pam_unix(sudo:session): session closed for user root
Dec 07 09:37:14 compute-1 sudo[67015]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcbphzpbrjyvsjtlynepfsigcqwfdiyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100234.3882902-1287-180595308020837/AnsiballZ_stat.py'
Dec 07 09:37:14 compute-1 sudo[67015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:37:15 compute-1 python3.9[67017]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:37:15 compute-1 sudo[67015]: pam_unix(sudo:session): session closed for user root
Dec 07 09:37:15 compute-1 sudo[67138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyltnwyjxhaeyqmnkjzfhmbelpmzzheg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100234.3882902-1287-180595308020837/AnsiballZ_copy.py'
Dec 07 09:37:15 compute-1 sudo[67138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:37:15 compute-1 python3.9[67140]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765100234.3882902-1287-180595308020837/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:37:15 compute-1 sudo[67138]: pam_unix(sudo:session): session closed for user root
Dec 07 09:37:16 compute-1 sudo[67290]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtmcrodpdhfyiaoaxjkwtliavqeinqxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100235.8849554-1332-24090658695923/AnsiballZ_file.py'
Dec 07 09:37:16 compute-1 sudo[67290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:37:16 compute-1 python3.9[67292]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:37:16 compute-1 sudo[67290]: pam_unix(sudo:session): session closed for user root
Dec 07 09:37:16 compute-1 sudo[67442]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouadyqfhavgbmhittgdohtmmvgensoee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100236.6481318-1356-80140791987160/AnsiballZ_command.py'
Dec 07 09:37:16 compute-1 sudo[67442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:37:17 compute-1 python3.9[67444]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:37:17 compute-1 sudo[67442]: pam_unix(sudo:session): session closed for user root
Dec 07 09:37:18 compute-1 sudo[67601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlhmhigxbfxqlvakduwukrmybmacuixu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100237.7171814-1381-52711325724301/AnsiballZ_blockinfile.py'
Dec 07 09:37:18 compute-1 sudo[67601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:37:18 compute-1 python3.9[67603]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:37:18 compute-1 sudo[67601]: pam_unix(sudo:session): session closed for user root
Dec 07 09:37:19 compute-1 sudo[67754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gizkkmxbqziduptlngrlemgkhehourmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100238.7645116-1407-84492910605789/AnsiballZ_file.py'
Dec 07 09:37:19 compute-1 sudo[67754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:37:19 compute-1 python3.9[67756]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:37:19 compute-1 sudo[67754]: pam_unix(sudo:session): session closed for user root
Dec 07 09:37:19 compute-1 sudo[67906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huifunomentrhtmhqnrybrnbvvskjrtc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100239.5817556-1407-126075092356058/AnsiballZ_file.py'
Dec 07 09:37:19 compute-1 sudo[67906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:37:20 compute-1 python3.9[67908]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:37:20 compute-1 sudo[67906]: pam_unix(sudo:session): session closed for user root
Dec 07 09:37:20 compute-1 sudo[68058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rekfhdtuirajcfukmenrvbadpbulyugu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100240.3762205-1452-73127518757619/AnsiballZ_mount.py'
Dec 07 09:37:20 compute-1 sudo[68058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:37:21 compute-1 python3.9[68060]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 07 09:37:21 compute-1 sudo[68058]: pam_unix(sudo:session): session closed for user root
Dec 07 09:37:21 compute-1 sudo[68211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehhntjwggofegukjmnmkadrrgwvxzxmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100241.2088537-1452-269652913826544/AnsiballZ_mount.py'
Dec 07 09:37:21 compute-1 sudo[68211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:37:21 compute-1 python3.9[68213]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 07 09:37:21 compute-1 sudo[68211]: pam_unix(sudo:session): session closed for user root
Dec 07 09:37:22 compute-1 sshd-session[59008]: Connection closed by 192.168.122.30 port 45698
Dec 07 09:37:22 compute-1 sshd-session[59005]: pam_unix(sshd:session): session closed for user zuul
Dec 07 09:37:22 compute-1 systemd[1]: session-15.scope: Deactivated successfully.
Dec 07 09:37:22 compute-1 systemd[1]: session-15.scope: Consumed 39.305s CPU time.
Dec 07 09:37:22 compute-1 systemd-logind[796]: Session 15 logged out. Waiting for processes to exit.
Dec 07 09:37:22 compute-1 systemd-logind[796]: Removed session 15.
Dec 07 09:37:28 compute-1 sshd-session[68239]: Accepted publickey for zuul from 192.168.122.30 port 58168 ssh2: ECDSA SHA256:Ge4vEI3tJyOAEV3kv3WUGTzBV/uoL+dgWZHkPhbKL64
Dec 07 09:37:28 compute-1 systemd-logind[796]: New session 16 of user zuul.
Dec 07 09:37:28 compute-1 systemd[1]: Started Session 16 of User zuul.
Dec 07 09:37:28 compute-1 sshd-session[68239]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 07 09:37:28 compute-1 sudo[68392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqjzppumqrljibiuisaslbqpcgpwkzxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100248.2823834-19-278202040537426/AnsiballZ_tempfile.py'
Dec 07 09:37:28 compute-1 sudo[68392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:37:29 compute-1 python3.9[68394]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec 07 09:37:29 compute-1 sudo[68392]: pam_unix(sudo:session): session closed for user root
Dec 07 09:37:29 compute-1 sudo[68544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcovoundmvgtpragxqzntynfotbdabwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100249.358905-55-25055082048702/AnsiballZ_stat.py'
Dec 07 09:37:29 compute-1 sudo[68544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:37:30 compute-1 python3.9[68546]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 07 09:37:30 compute-1 sudo[68544]: pam_unix(sudo:session): session closed for user root
Dec 07 09:37:30 compute-1 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 07 09:37:30 compute-1 sudo[68698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nozidcjvoavjnomuzfsvxfvgfarfxyqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100250.4074516-85-185367753033146/AnsiballZ_setup.py'
Dec 07 09:37:30 compute-1 sudo[68698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:37:31 compute-1 python3.9[68700]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 07 09:37:31 compute-1 sudo[68698]: pam_unix(sudo:session): session closed for user root
Dec 07 09:37:31 compute-1 sudo[68850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdctlwfsopgaxmpozcpfbtiofkmyaywn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100251.554329-110-277637120048077/AnsiballZ_blockinfile.py'
Dec 07 09:37:31 compute-1 sudo[68850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:37:32 compute-1 python3.9[68852]: ansible-ansible.builtin.blockinfile Invoked with block=compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDDztIgdvWfbGcTBsnJ/M+7HPF8fmQq/y+Bl35+zFajL3KlZAwT5Jrd0wBJFCENJp3TXe2vCz5X1q7WE7KkTCmfFoRuHmoqlZhTqT9s/+r8kiDatZiqCOWaKW4t/5FdXKBIVPlkry4+jUtXum7Hjaqx3CWAN9zTBaMGorSAA8LKMMvZPP0EYbAxaLgivTJ1mbZF0/ZNGo/5WQc2vAa9bAToTb0YwrajhjGwm8gpS1t7deqebzgprT7jWeXpxQZEVS/ynyQFICZ5W6covXVgsWgQNtfbmweGFQOMlP0vZE1/P3GUjWJgmaVsDrNDWdjCgiaRAZnNCC01eZyUjas+eot7B1Sg0BLS3JeORj3tIRcVI9DkuMQCdex5q/BCiz8YueUZn4qIiyvmG1max5Xui0X1LygXyNdyBWs5DbBGfPsFBLyXT1noEfYsgk5v0iu8DLl+PShKLO8xLqJMeYVYsUY8uG6qv+lA0YbVeiMomYLVXMABowwzcwzKHnlj5f+keT0=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICAU0KXuEPsaXKf0jGICVhewmjwEgAqPrkc4waZyQc7o
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBUF894VPJUzj6uHFODSSpNciOlDtn3PuhA44yhVzfkk/lOehkynDHVgBX6zwUYnOmiLJE7vHinKqWzoAVHhOas=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCip7MnZvuJx8DmLIGnIc8NcND4H8xH1hog1PQWG+WFQHEqpA3BpOSGhk8Mr5skxXappecIladNg73ReINM2gE58XsvsHhQICeXuRBK091YtVSafixD3fEvhD+xGUIukp3F6EPKU0x4WQ0xWQC38o13OyZtGRApI6AQEAxg0QMsB7qwwroH6ag7l7U4sv5nYqK3upInbblwL0LYfo6jyhHnhwZBVjv2MTJ8zZktF54SlM68fh8WQwQbA7VMqK6wEJlDRkdsIXPbq2PN6V08KJlBkBlvgXu5aTIeGQ5DdFuKQutnMEWlwiCtoJNly6Pv7PwjZnDKkPQP5RamELk/eKCRHXY5SbfmyG9VtAHHEV2f9NsjnFZRBx9ikx/H6/NpPmlMji5VbyfY1b0u0DreNZqm2bDWRcL++rsjZDfWqh2cJOF4Jan0m12bfjWDBXeGiunpl4XWydA0nbi0v4RHvH6pD2BoTuxC2rVSR233WC88Xe5HU1WoXegIy43ksMeFvGs=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINYukxfCIA1Xurqi7GbVHfVTkzw++ujxQPgfwUA9AznN
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOT58aEV4d46XVVznwJYUJL8kuqtWeT85ng6XRArVPbONJirV0BPyfS1SwB7SxPwywavSEowgTdPM8QvrYiA0kE=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCVimUYmVq1jwN4I5i4nI9XPpovC84bLnjioQY6MxnDdHWaEfuEub8qpNrfkTCFppybs82dXQEl9witk6tAj8GQQGfFN/IfI+GFHby5G2bWpOumixFRFVkhc3QW9inlnJNA0TMzwlbz5LOkL9/ShhCpshMnBGNjKJFaH5GvlqpWCYYAotq1zbwd6SRIu4O5cPa3+7mFmXKtlFl28oAFp3NMsNJ9wbIWhXeOcfUSNbrL52O30C6TKW8HiBC2kfg578bm0Pa6r2iMvPHhW7kMm5eQwUfB5l5JKgIsDJmaKjLej/4U7hO52yut7hfnV3O8qK0ZpD2xEwhe9OneH4tKueT63SehDENUIJWAasPiPrlHWkfm6PWhKwPMBu3Vuir/4R1SA6ZIJEzQeGq/nUuSBtbDZC4jDuXb8oywpR/uCaBgZbziPhqBMIegQDMvKeQGQmZn6V+eKkfv3I9Z83LbQRXEnIWiuf4XRp1btGZYv0+Q7zgiD+dw9QxCgWkdWxA9SoM=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEWDyTOT2SMCqj8YwhAvKshXrBfGOObG4cDM9r5B2FZj
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJuP9cUBko1m6+714/2inXnWXQqIN7Sx7/A0GBQAjM8bAkICVNXZtk9Pu38lY43gxHx3nZ57o3Dpp2ak8tsjrR4=
                                             create=True mode=0644 path=/tmp/ansible.yd7m200u state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:37:32 compute-1 sudo[68850]: pam_unix(sudo:session): session closed for user root
Dec 07 09:37:32 compute-1 sudo[69002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hythcaitzehobpcbyfmjzoqjporzctcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100252.4006834-134-138120128957663/AnsiballZ_command.py'
Dec 07 09:37:32 compute-1 sudo[69002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:37:33 compute-1 python3.9[69004]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.yd7m200u' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:37:33 compute-1 sudo[69002]: pam_unix(sudo:session): session closed for user root
Dec 07 09:37:33 compute-1 sudo[69156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wphjkdytfrutyqvbhxmdzentrpzrlckt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100253.3040645-158-204190384792934/AnsiballZ_file.py'
Dec 07 09:37:33 compute-1 sudo[69156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:37:33 compute-1 python3.9[69158]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.yd7m200u state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:37:33 compute-1 sudo[69156]: pam_unix(sudo:session): session closed for user root
Dec 07 09:37:34 compute-1 sshd-session[68242]: Connection closed by 192.168.122.30 port 58168
Dec 07 09:37:34 compute-1 sshd-session[68239]: pam_unix(sshd:session): session closed for user zuul
Dec 07 09:37:34 compute-1 systemd[1]: session-16.scope: Deactivated successfully.
Dec 07 09:37:34 compute-1 systemd[1]: session-16.scope: Consumed 4.012s CPU time.
Dec 07 09:37:34 compute-1 systemd-logind[796]: Session 16 logged out. Waiting for processes to exit.
Dec 07 09:37:34 compute-1 systemd-logind[796]: Removed session 16.
Dec 07 09:37:39 compute-1 sshd-session[69183]: Accepted publickey for zuul from 192.168.122.30 port 51318 ssh2: ECDSA SHA256:Ge4vEI3tJyOAEV3kv3WUGTzBV/uoL+dgWZHkPhbKL64
Dec 07 09:37:39 compute-1 systemd-logind[796]: New session 17 of user zuul.
Dec 07 09:37:39 compute-1 systemd[1]: Started Session 17 of User zuul.
Dec 07 09:37:39 compute-1 sshd-session[69183]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 07 09:37:40 compute-1 python3.9[69336]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 07 09:37:41 compute-1 sudo[69490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvvcgajddbldpbzaxkxxtzlbjlastdus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100260.7906692-57-11017404706514/AnsiballZ_systemd.py'
Dec 07 09:37:41 compute-1 sudo[69490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:37:41 compute-1 python3.9[69492]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 07 09:37:41 compute-1 sudo[69490]: pam_unix(sudo:session): session closed for user root
Dec 07 09:37:42 compute-1 sudo[69644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvuowxaispcsirslbxsonocvkpfmcydz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100262.0815458-81-34523296017805/AnsiballZ_systemd.py'
Dec 07 09:37:42 compute-1 sudo[69644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:37:42 compute-1 python3.9[69646]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 07 09:37:42 compute-1 sudo[69644]: pam_unix(sudo:session): session closed for user root
Dec 07 09:37:43 compute-1 sudo[69797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pldatmqmziqgieimjcxcatucsciwtatl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100263.041541-108-23986393185236/AnsiballZ_command.py'
Dec 07 09:37:43 compute-1 sudo[69797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:37:43 compute-1 python3.9[69799]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:37:43 compute-1 sudo[69797]: pam_unix(sudo:session): session closed for user root
Dec 07 09:37:44 compute-1 sudo[69950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-meuddcfymwtbrpkbjyturxjnnwrsslcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100263.9550126-132-32090212902660/AnsiballZ_stat.py'
Dec 07 09:37:44 compute-1 sudo[69950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:37:44 compute-1 python3.9[69952]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 07 09:37:44 compute-1 sudo[69950]: pam_unix(sudo:session): session closed for user root
Dec 07 09:37:45 compute-1 sudo[70104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpovgejifjzfmqsocvztauuoggxunbyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100264.879091-156-110865422184592/AnsiballZ_command.py'
Dec 07 09:37:45 compute-1 sudo[70104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:37:45 compute-1 python3.9[70106]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:37:45 compute-1 sudo[70104]: pam_unix(sudo:session): session closed for user root
Dec 07 09:37:46 compute-1 sudo[70259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujvhxhgtabksbdgctjkggahketrqybqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100265.6759427-180-5133936047706/AnsiballZ_file.py'
Dec 07 09:37:46 compute-1 sudo[70259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:37:46 compute-1 python3.9[70261]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:37:46 compute-1 sudo[70259]: pam_unix(sudo:session): session closed for user root
Dec 07 09:37:46 compute-1 sshd-session[69186]: Connection closed by 192.168.122.30 port 51318
Dec 07 09:37:46 compute-1 sshd-session[69183]: pam_unix(sshd:session): session closed for user zuul
Dec 07 09:37:46 compute-1 systemd[1]: session-17.scope: Deactivated successfully.
Dec 07 09:37:46 compute-1 systemd[1]: session-17.scope: Consumed 4.998s CPU time.
Dec 07 09:37:46 compute-1 systemd-logind[796]: Session 17 logged out. Waiting for processes to exit.
Dec 07 09:37:47 compute-1 systemd-logind[796]: Removed session 17.
Dec 07 09:37:52 compute-1 sshd-session[70286]: Accepted publickey for zuul from 192.168.122.30 port 54442 ssh2: ECDSA SHA256:Ge4vEI3tJyOAEV3kv3WUGTzBV/uoL+dgWZHkPhbKL64
Dec 07 09:37:52 compute-1 systemd-logind[796]: New session 18 of user zuul.
Dec 07 09:37:52 compute-1 systemd[1]: Started Session 18 of User zuul.
Dec 07 09:37:52 compute-1 sshd-session[70286]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 07 09:37:53 compute-1 python3.9[70439]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 07 09:37:54 compute-1 sudo[70593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqumynntphvdfifzazjmvzjwkgfbxxvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100273.9373777-63-19039849796125/AnsiballZ_setup.py'
Dec 07 09:37:54 compute-1 sudo[70593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:37:54 compute-1 python3.9[70595]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 07 09:37:54 compute-1 sudo[70593]: pam_unix(sudo:session): session closed for user root
Dec 07 09:37:55 compute-1 sudo[70677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myrunvemjdlnfbpznxbuzdfurrsnudby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100273.9373777-63-19039849796125/AnsiballZ_dnf.py'
Dec 07 09:37:55 compute-1 sudo[70677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:37:55 compute-1 python3.9[70679]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 07 09:37:56 compute-1 sudo[70677]: pam_unix(sudo:session): session closed for user root
Dec 07 09:37:57 compute-1 python3.9[70830]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:37:59 compute-1 python3.9[70981]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 07 09:38:00 compute-1 python3.9[71131]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 07 09:38:00 compute-1 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 07 09:38:00 compute-1 python3.9[71282]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 07 09:38:01 compute-1 sshd-session[70289]: Connection closed by 192.168.122.30 port 54442
Dec 07 09:38:01 compute-1 sshd-session[70286]: pam_unix(sshd:session): session closed for user zuul
Dec 07 09:38:01 compute-1 systemd[1]: session-18.scope: Deactivated successfully.
Dec 07 09:38:01 compute-1 systemd[1]: session-18.scope: Consumed 6.380s CPU time.
Dec 07 09:38:01 compute-1 systemd-logind[796]: Session 18 logged out. Waiting for processes to exit.
Dec 07 09:38:01 compute-1 systemd-logind[796]: Removed session 18.
Dec 07 09:38:10 compute-1 sshd-session[71307]: Accepted publickey for zuul from 38.102.83.80 port 46360 ssh2: RSA SHA256:hct83ililSSWAsGgD0ULsAQ0r1pHbrJ2CU75MFgoHRo
Dec 07 09:38:10 compute-1 systemd-logind[796]: New session 19 of user zuul.
Dec 07 09:38:10 compute-1 systemd[1]: Started Session 19 of User zuul.
Dec 07 09:38:10 compute-1 sshd-session[71307]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 07 09:38:10 compute-1 sudo[71383]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uozkybgjahayjjzvgxhioigidtxggkmu ; /usr/bin/python3'
Dec 07 09:38:10 compute-1 sudo[71383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:38:10 compute-1 useradd[71387]: new group: name=ceph-admin, GID=42478
Dec 07 09:38:10 compute-1 useradd[71387]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Dec 07 09:38:10 compute-1 sudo[71383]: pam_unix(sudo:session): session closed for user root
Dec 07 09:38:11 compute-1 sudo[71469]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwoejdjfytdsacpkizzqrqlhbsdegqqf ; /usr/bin/python3'
Dec 07 09:38:11 compute-1 sudo[71469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:38:11 compute-1 sudo[71469]: pam_unix(sudo:session): session closed for user root
Dec 07 09:38:11 compute-1 sudo[71542]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qevnvlhlibsbwiwsgzxhhmbxjekrdwzr ; /usr/bin/python3'
Dec 07 09:38:11 compute-1 sudo[71542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:38:11 compute-1 sudo[71542]: pam_unix(sudo:session): session closed for user root
Dec 07 09:38:11 compute-1 sudo[71592]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jimzhxefwafmjjrxxdyglzeqqmaayaad ; /usr/bin/python3'
Dec 07 09:38:11 compute-1 sudo[71592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:38:12 compute-1 sudo[71592]: pam_unix(sudo:session): session closed for user root
Dec 07 09:38:12 compute-1 sudo[71618]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfcopfcpwtekgjdpumrlfuookxrkvsrg ; /usr/bin/python3'
Dec 07 09:38:12 compute-1 sudo[71618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:38:12 compute-1 sudo[71618]: pam_unix(sudo:session): session closed for user root
Dec 07 09:38:12 compute-1 sudo[71644]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejwbyqqbrmqtpmbyptkthtgmbnxcbhyp ; /usr/bin/python3'
Dec 07 09:38:12 compute-1 sudo[71644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:38:12 compute-1 sudo[71644]: pam_unix(sudo:session): session closed for user root
Dec 07 09:38:13 compute-1 sudo[71671]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccingbwiatkgnvhjnpcjrogwzgocouha ; /usr/bin/python3'
Dec 07 09:38:13 compute-1 sudo[71671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:38:13 compute-1 sudo[71671]: pam_unix(sudo:session): session closed for user root
Dec 07 09:38:13 compute-1 sudo[71749]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhmmrffrrpjhhsdgrkhfjvondqynjuke ; /usr/bin/python3'
Dec 07 09:38:13 compute-1 sudo[71749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:38:13 compute-1 sudo[71749]: pam_unix(sudo:session): session closed for user root
Dec 07 09:38:14 compute-1 sudo[71822]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycunrdhanymfvoldomcelguwhwiacwzl ; /usr/bin/python3'
Dec 07 09:38:14 compute-1 sudo[71822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:38:14 compute-1 sudo[71822]: pam_unix(sudo:session): session closed for user root
Dec 07 09:38:14 compute-1 sudo[71924]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjbuhqaocvbtuzgjifenujchtrltjvdz ; /usr/bin/python3'
Dec 07 09:38:14 compute-1 sudo[71924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:38:14 compute-1 sudo[71924]: pam_unix(sudo:session): session closed for user root
Dec 07 09:38:15 compute-1 sudo[71997]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-joplnlzpyovdowkheksinmbganjbniha ; /usr/bin/python3'
Dec 07 09:38:15 compute-1 sudo[71997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:38:15 compute-1 sudo[71997]: pam_unix(sudo:session): session closed for user root
Dec 07 09:38:15 compute-1 sudo[72047]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oraucgjizdzwiroahqcomjdfgifqdaiu ; /usr/bin/python3'
Dec 07 09:38:15 compute-1 sudo[72047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:38:16 compute-1 python3[72049]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 07 09:38:16 compute-1 sudo[72047]: pam_unix(sudo:session): session closed for user root
Dec 07 09:38:17 compute-1 chronyd[58524]: Selected source 162.159.200.1 (pool.ntp.org)
Dec 07 09:38:17 compute-1 sudo[72142]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgnglqotdrxhdjfxsqqmvmcysjmukcxg ; /usr/bin/python3'
Dec 07 09:38:17 compute-1 sudo[72142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:38:18 compute-1 python3[72144]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 07 09:38:19 compute-1 sudo[72142]: pam_unix(sudo:session): session closed for user root
Dec 07 09:38:19 compute-1 sudo[72169]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdtgecoiprlmuouseuedqggrkispxuwk ; /usr/bin/python3'
Dec 07 09:38:19 compute-1 sudo[72169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:38:19 compute-1 python3[72171]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 07 09:38:19 compute-1 sudo[72169]: pam_unix(sudo:session): session closed for user root
Dec 07 09:38:19 compute-1 sudo[72195]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqtpbttzygqahnrrrovfymkttyxhhvbm ; /usr/bin/python3'
Dec 07 09:38:19 compute-1 sudo[72195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:38:19 compute-1 python3[72197]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:38:19 compute-1 kernel: loop: module loaded
Dec 07 09:38:19 compute-1 kernel: loop3: detected capacity change from 0 to 41943040
Dec 07 09:38:20 compute-1 sudo[72195]: pam_unix(sudo:session): session closed for user root
Dec 07 09:38:20 compute-1 sudo[72230]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ommxfszglmobracjbvhehbczqpqcxnml ; /usr/bin/python3'
Dec 07 09:38:20 compute-1 sudo[72230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:38:20 compute-1 python3[72232]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:38:20 compute-1 lvm[72235]: PV /dev/loop3 not used.
Dec 07 09:38:20 compute-1 lvm[72244]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 07 09:38:20 compute-1 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Dec 07 09:38:20 compute-1 lvm[72246]:   1 logical volume(s) in volume group "ceph_vg0" now active
Dec 07 09:38:20 compute-1 sudo[72230]: pam_unix(sudo:session): session closed for user root
Dec 07 09:38:20 compute-1 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Dec 07 09:38:21 compute-1 sudo[72322]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nitkmmpblnihlkghsvbxlnbasfalxpga ; /usr/bin/python3'
Dec 07 09:38:21 compute-1 sudo[72322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:38:21 compute-1 python3[72324]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 07 09:38:21 compute-1 sudo[72322]: pam_unix(sudo:session): session closed for user root
Dec 07 09:38:21 compute-1 sudo[72395]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swfrxwbdhigkgieepavmjggxyjdecloz ; /usr/bin/python3'
Dec 07 09:38:21 compute-1 sudo[72395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:38:21 compute-1 python3[72397]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765100300.935278-36825-69332789345488/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:38:21 compute-1 sudo[72395]: pam_unix(sudo:session): session closed for user root
Dec 07 09:38:22 compute-1 sudo[72445]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iseavvrwjwtggvpqfjktcipndxbmqdmu ; /usr/bin/python3'
Dec 07 09:38:22 compute-1 sudo[72445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:38:22 compute-1 python3[72447]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 07 09:38:22 compute-1 systemd[1]: Reloading.
Dec 07 09:38:22 compute-1 systemd-rc-local-generator[72477]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:38:22 compute-1 systemd-sysv-generator[72480]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:38:22 compute-1 systemd[1]: Starting Ceph OSD losetup...
Dec 07 09:38:22 compute-1 bash[72487]: /dev/loop3: [64513]:4327942 (/var/lib/ceph-osd-0.img)
Dec 07 09:38:22 compute-1 systemd[1]: Finished Ceph OSD losetup.
Dec 07 09:38:22 compute-1 lvm[72488]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 07 09:38:22 compute-1 lvm[72488]: VG ceph_vg0 finished
Dec 07 09:38:22 compute-1 sudo[72445]: pam_unix(sudo:session): session closed for user root
Dec 07 09:38:25 compute-1 python3[72512]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 07 09:39:56 compute-1 sshd-session[72559]: Accepted publickey for ceph-admin from 192.168.122.100 port 37118 ssh2: RSA SHA256:6l98s8c6wPNzPo5MkgJlyQXSDM1JtoqRSeqTJW3rr9A
Dec 07 09:39:56 compute-1 systemd[1]: Created slice User Slice of UID 42477.
Dec 07 09:39:56 compute-1 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec 07 09:39:56 compute-1 systemd-logind[796]: New session 20 of user ceph-admin.
Dec 07 09:39:56 compute-1 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec 07 09:39:56 compute-1 systemd[1]: Starting User Manager for UID 42477...
Dec 07 09:39:56 compute-1 systemd[72563]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 07 09:39:56 compute-1 systemd[72563]: Queued start job for default target Main User Target.
Dec 07 09:39:56 compute-1 systemd[72563]: Created slice User Application Slice.
Dec 07 09:39:56 compute-1 systemd[72563]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 07 09:39:56 compute-1 systemd[72563]: Started Daily Cleanup of User's Temporary Directories.
Dec 07 09:39:56 compute-1 systemd[72563]: Reached target Paths.
Dec 07 09:39:56 compute-1 systemd[72563]: Reached target Timers.
Dec 07 09:39:56 compute-1 systemd[72563]: Starting D-Bus User Message Bus Socket...
Dec 07 09:39:56 compute-1 systemd[72563]: Starting Create User's Volatile Files and Directories...
Dec 07 09:39:56 compute-1 systemd[72563]: Listening on D-Bus User Message Bus Socket.
Dec 07 09:39:56 compute-1 systemd[72563]: Reached target Sockets.
Dec 07 09:39:56 compute-1 sshd-session[72576]: Accepted publickey for ceph-admin from 192.168.122.100 port 37126 ssh2: RSA SHA256:6l98s8c6wPNzPo5MkgJlyQXSDM1JtoqRSeqTJW3rr9A
Dec 07 09:39:56 compute-1 systemd[72563]: Finished Create User's Volatile Files and Directories.
Dec 07 09:39:56 compute-1 systemd[72563]: Reached target Basic System.
Dec 07 09:39:56 compute-1 systemd[72563]: Reached target Main User Target.
Dec 07 09:39:56 compute-1 systemd[72563]: Startup finished in 135ms.
Dec 07 09:39:56 compute-1 systemd-logind[796]: New session 22 of user ceph-admin.
Dec 07 09:39:56 compute-1 systemd[1]: Started User Manager for UID 42477.
Dec 07 09:39:56 compute-1 systemd[1]: Started Session 20 of User ceph-admin.
Dec 07 09:39:56 compute-1 systemd[1]: Started Session 22 of User ceph-admin.
Dec 07 09:39:56 compute-1 sshd-session[72559]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 07 09:39:56 compute-1 sshd-session[72576]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 07 09:39:56 compute-1 sudo[72583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 09:39:56 compute-1 sudo[72583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:39:56 compute-1 sudo[72583]: pam_unix(sudo:session): session closed for user root
Dec 07 09:39:56 compute-1 sshd-session[72608]: Accepted publickey for ceph-admin from 192.168.122.100 port 37130 ssh2: RSA SHA256:6l98s8c6wPNzPo5MkgJlyQXSDM1JtoqRSeqTJW3rr9A
Dec 07 09:39:56 compute-1 systemd-logind[796]: New session 23 of user ceph-admin.
Dec 07 09:39:56 compute-1 systemd[1]: Started Session 23 of User ceph-admin.
Dec 07 09:39:56 compute-1 sshd-session[72608]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 07 09:39:57 compute-1 sudo[72612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-1
Dec 07 09:39:57 compute-1 sudo[72612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:39:57 compute-1 sudo[72612]: pam_unix(sudo:session): session closed for user root
Dec 07 09:39:57 compute-1 sshd-session[72637]: Accepted publickey for ceph-admin from 192.168.122.100 port 37142 ssh2: RSA SHA256:6l98s8c6wPNzPo5MkgJlyQXSDM1JtoqRSeqTJW3rr9A
Dec 07 09:39:57 compute-1 systemd-logind[796]: New session 24 of user ceph-admin.
Dec 07 09:39:57 compute-1 systemd[1]: Started Session 24 of User ceph-admin.
Dec 07 09:39:57 compute-1 sshd-session[72637]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 07 09:39:57 compute-1 sudo[72641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36
Dec 07 09:39:57 compute-1 sudo[72641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:39:57 compute-1 sudo[72641]: pam_unix(sudo:session): session closed for user root
Dec 07 09:39:57 compute-1 sshd-session[72666]: Accepted publickey for ceph-admin from 192.168.122.100 port 37144 ssh2: RSA SHA256:6l98s8c6wPNzPo5MkgJlyQXSDM1JtoqRSeqTJW3rr9A
Dec 07 09:39:57 compute-1 systemd-logind[796]: New session 25 of user ceph-admin.
Dec 07 09:39:57 compute-1 systemd[1]: Started Session 25 of User ceph-admin.
Dec 07 09:39:57 compute-1 sshd-session[72666]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 07 09:39:57 compute-1 sudo[72670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c
Dec 07 09:39:57 compute-1 sudo[72670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:39:57 compute-1 sudo[72670]: pam_unix(sudo:session): session closed for user root
Dec 07 09:39:57 compute-1 sshd-session[72695]: Accepted publickey for ceph-admin from 192.168.122.100 port 37148 ssh2: RSA SHA256:6l98s8c6wPNzPo5MkgJlyQXSDM1JtoqRSeqTJW3rr9A
Dec 07 09:39:57 compute-1 systemd-logind[796]: New session 26 of user ceph-admin.
Dec 07 09:39:57 compute-1 systemd[1]: Started Session 26 of User ceph-admin.
Dec 07 09:39:57 compute-1 sshd-session[72695]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 07 09:39:58 compute-1 sudo[72699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c
Dec 07 09:39:58 compute-1 sudo[72699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:39:58 compute-1 sudo[72699]: pam_unix(sudo:session): session closed for user root
Dec 07 09:39:58 compute-1 sshd-session[72724]: Accepted publickey for ceph-admin from 192.168.122.100 port 37156 ssh2: RSA SHA256:6l98s8c6wPNzPo5MkgJlyQXSDM1JtoqRSeqTJW3rr9A
Dec 07 09:39:58 compute-1 systemd-logind[796]: New session 27 of user ceph-admin.
Dec 07 09:39:58 compute-1 systemd[1]: Started Session 27 of User ceph-admin.
Dec 07 09:39:58 compute-1 sshd-session[72724]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 07 09:39:58 compute-1 sudo[72728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new
Dec 07 09:39:58 compute-1 sudo[72728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:39:58 compute-1 sudo[72728]: pam_unix(sudo:session): session closed for user root
Dec 07 09:39:58 compute-1 sshd-session[72753]: Accepted publickey for ceph-admin from 192.168.122.100 port 37158 ssh2: RSA SHA256:6l98s8c6wPNzPo5MkgJlyQXSDM1JtoqRSeqTJW3rr9A
Dec 07 09:39:58 compute-1 systemd-logind[796]: New session 28 of user ceph-admin.
Dec 07 09:39:58 compute-1 systemd[1]: Started Session 28 of User ceph-admin.
Dec 07 09:39:58 compute-1 sshd-session[72753]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 07 09:39:58 compute-1 sudo[72757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c
Dec 07 09:39:58 compute-1 sudo[72757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:39:58 compute-1 sudo[72757]: pam_unix(sudo:session): session closed for user root
Dec 07 09:39:58 compute-1 sshd-session[72782]: Accepted publickey for ceph-admin from 192.168.122.100 port 37174 ssh2: RSA SHA256:6l98s8c6wPNzPo5MkgJlyQXSDM1JtoqRSeqTJW3rr9A
Dec 07 09:39:58 compute-1 systemd-logind[796]: New session 29 of user ceph-admin.
Dec 07 09:39:58 compute-1 systemd[1]: Started Session 29 of User ceph-admin.
Dec 07 09:39:58 compute-1 sshd-session[72782]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 07 09:39:59 compute-1 sudo[72786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new
Dec 07 09:39:59 compute-1 sudo[72786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:39:59 compute-1 sudo[72786]: pam_unix(sudo:session): session closed for user root
Dec 07 09:39:59 compute-1 sshd-session[72811]: Accepted publickey for ceph-admin from 192.168.122.100 port 37176 ssh2: RSA SHA256:6l98s8c6wPNzPo5MkgJlyQXSDM1JtoqRSeqTJW3rr9A
Dec 07 09:39:59 compute-1 systemd-logind[796]: New session 30 of user ceph-admin.
Dec 07 09:39:59 compute-1 systemd[1]: Started Session 30 of User ceph-admin.
Dec 07 09:39:59 compute-1 sshd-session[72811]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 07 09:40:00 compute-1 sshd-session[72838]: Accepted publickey for ceph-admin from 192.168.122.100 port 34902 ssh2: RSA SHA256:6l98s8c6wPNzPo5MkgJlyQXSDM1JtoqRSeqTJW3rr9A
Dec 07 09:40:00 compute-1 systemd-logind[796]: New session 31 of user ceph-admin.
Dec 07 09:40:00 compute-1 systemd[1]: Started Session 31 of User ceph-admin.
Dec 07 09:40:00 compute-1 sshd-session[72838]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 07 09:40:00 compute-1 sudo[72842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36
Dec 07 09:40:00 compute-1 sudo[72842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:40:00 compute-1 sudo[72842]: pam_unix(sudo:session): session closed for user root
Dec 07 09:40:00 compute-1 sshd-session[72867]: Accepted publickey for ceph-admin from 192.168.122.100 port 34910 ssh2: RSA SHA256:6l98s8c6wPNzPo5MkgJlyQXSDM1JtoqRSeqTJW3rr9A
Dec 07 09:40:00 compute-1 systemd-logind[796]: New session 32 of user ceph-admin.
Dec 07 09:40:00 compute-1 systemd[1]: Started Session 32 of User ceph-admin.
Dec 07 09:40:00 compute-1 sshd-session[72867]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 07 09:40:00 compute-1 sudo[72871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-1
Dec 07 09:40:00 compute-1 sudo[72871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:40:01 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 07 09:40:01 compute-1 sudo[72871]: pam_unix(sudo:session): session closed for user root
Dec 07 09:40:01 compute-1 sudo[72917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 09:40:01 compute-1 sudo[72917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:40:01 compute-1 sudo[72917]: pam_unix(sudo:session): session closed for user root
Dec 07 09:40:01 compute-1 sudo[72942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Dec 07 09:40:01 compute-1 sudo[72942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:40:01 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 07 09:40:01 compute-1 sudo[72942]: pam_unix(sudo:session): session closed for user root
Dec 07 09:40:01 compute-1 sudo[72986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 09:40:01 compute-1 sudo[72986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:40:01 compute-1 sudo[72986]: pam_unix(sudo:session): session closed for user root
Dec 07 09:40:01 compute-1 sudo[73011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 07 09:40:01 compute-1 sudo[73011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:40:02 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 07 09:40:02 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 07 09:40:02 compute-1 sudo[73011]: pam_unix(sudo:session): session closed for user root
Dec 07 09:40:02 compute-1 sudo[73073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 09:40:02 compute-1 sudo[73073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:40:02 compute-1 sudo[73073]: pam_unix(sudo:session): session closed for user root
Dec 07 09:40:02 compute-1 sudo[73098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 07 09:40:02 compute-1 sudo[73098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:40:02 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 07 09:40:02 compute-1 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 73134 (sysctl)
Dec 07 09:40:02 compute-1 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Dec 07 09:40:02 compute-1 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Dec 07 09:40:02 compute-1 sudo[73098]: pam_unix(sudo:session): session closed for user root
Dec 07 09:40:02 compute-1 sudo[73156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 09:40:02 compute-1 sudo[73156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:40:02 compute-1 sudo[73156]: pam_unix(sudo:session): session closed for user root
Dec 07 09:40:02 compute-1 sudo[73181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Dec 07 09:40:03 compute-1 sudo[73181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:40:03 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 07 09:40:03 compute-1 sudo[73181]: pam_unix(sudo:session): session closed for user root
Dec 07 09:40:03 compute-1 sudo[73226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 09:40:03 compute-1 sudo[73226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:40:03 compute-1 sudo[73226]: pam_unix(sudo:session): session closed for user root
Dec 07 09:40:03 compute-1 sudo[73251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -- inventory --format=json-pretty --filter-for-batch
Dec 07 09:40:03 compute-1 sudo[73251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:40:03 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 07 09:40:05 compute-1 systemd[1]: var-lib-containers-storage-overlay-compat1242700705-lower\x2dmapped.mount: Deactivated successfully.
Dec 07 09:40:21 compute-1 podman[73311]: 2025-12-07 09:40:21.937326041 +0000 UTC m=+18.253286416 container create b562b64180dbfea9c08852b4ad0bb1e5cdb8cb527a09c0f3cc67f525c38e2677 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_liskov, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 07 09:40:21 compute-1 podman[73311]: 2025-12-07 09:40:21.917206207 +0000 UTC m=+18.233166592 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 07 09:40:21 compute-1 systemd[1]: var-lib-containers-storage-overlay-volatile\x2dcheck3089169461-merged.mount: Deactivated successfully.
Dec 07 09:40:21 compute-1 systemd[1]: Created slice Virtual Machine and Container Slice.
Dec 07 09:40:21 compute-1 systemd[1]: Started libpod-conmon-b562b64180dbfea9c08852b4ad0bb1e5cdb8cb527a09c0f3cc67f525c38e2677.scope.
Dec 07 09:40:22 compute-1 systemd[1]: Started libcrun container.
Dec 07 09:40:22 compute-1 podman[73311]: 2025-12-07 09:40:22.051273541 +0000 UTC m=+18.367233936 container init b562b64180dbfea9c08852b4ad0bb1e5cdb8cb527a09c0f3cc67f525c38e2677 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_liskov, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec 07 09:40:22 compute-1 podman[73311]: 2025-12-07 09:40:22.059662414 +0000 UTC m=+18.375622819 container start b562b64180dbfea9c08852b4ad0bb1e5cdb8cb527a09c0f3cc67f525c38e2677 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 07 09:40:22 compute-1 podman[73311]: 2025-12-07 09:40:22.063663501 +0000 UTC m=+18.379623916 container attach b562b64180dbfea9c08852b4ad0bb1e5cdb8cb527a09c0f3cc67f525c38e2677 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_liskov, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 07 09:40:22 compute-1 mystifying_liskov[73370]: 167 167
Dec 07 09:40:22 compute-1 systemd[1]: libpod-b562b64180dbfea9c08852b4ad0bb1e5cdb8cb527a09c0f3cc67f525c38e2677.scope: Deactivated successfully.
Dec 07 09:40:22 compute-1 podman[73311]: 2025-12-07 09:40:22.065824698 +0000 UTC m=+18.381785073 container died b562b64180dbfea9c08852b4ad0bb1e5cdb8cb527a09c0f3cc67f525c38e2677 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 07 09:40:22 compute-1 systemd[1]: var-lib-containers-storage-overlay-d86b570ae701788a0e83e04ed792821a72d8ecb92a60ebe88ec11bbf27eb2b6c-merged.mount: Deactivated successfully.
Dec 07 09:40:22 compute-1 podman[73311]: 2025-12-07 09:40:22.102021231 +0000 UTC m=+18.417981606 container remove b562b64180dbfea9c08852b4ad0bb1e5cdb8cb527a09c0f3cc67f525c38e2677 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_liskov, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec 07 09:40:22 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 07 09:40:22 compute-1 systemd[1]: libpod-conmon-b562b64180dbfea9c08852b4ad0bb1e5cdb8cb527a09c0f3cc67f525c38e2677.scope: Deactivated successfully.
Dec 07 09:40:22 compute-1 podman[73394]: 2025-12-07 09:40:22.331522582 +0000 UTC m=+0.068240025 container create 23ab9ec4d97632da84fdb5d818f4f9cd211d4c9ed5065f600f0c1acb85f36030 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_heisenberg, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 07 09:40:22 compute-1 systemd[1]: Started libpod-conmon-23ab9ec4d97632da84fdb5d818f4f9cd211d4c9ed5065f600f0c1acb85f36030.scope.
Dec 07 09:40:22 compute-1 podman[73394]: 2025-12-07 09:40:22.302308856 +0000 UTC m=+0.039026359 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 07 09:40:22 compute-1 systemd[1]: Started libcrun container.
Dec 07 09:40:22 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bee752142edd2fc66312069257e31c54ef3bf814b28495efaea4f1be01be27a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 07 09:40:22 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bee752142edd2fc66312069257e31c54ef3bf814b28495efaea4f1be01be27a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 07 09:40:22 compute-1 podman[73394]: 2025-12-07 09:40:22.446971883 +0000 UTC m=+0.183689356 container init 23ab9ec4d97632da84fdb5d818f4f9cd211d4c9ed5065f600f0c1acb85f36030 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 07 09:40:22 compute-1 podman[73394]: 2025-12-07 09:40:22.459471315 +0000 UTC m=+0.196188748 container start 23ab9ec4d97632da84fdb5d818f4f9cd211d4c9ed5065f600f0c1acb85f36030 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 07 09:40:22 compute-1 podman[73394]: 2025-12-07 09:40:22.466951744 +0000 UTC m=+0.203669257 container attach 23ab9ec4d97632da84fdb5d818f4f9cd211d4c9ed5065f600f0c1acb85f36030 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 07 09:40:23 compute-1 happy_heisenberg[73411]: [
Dec 07 09:40:23 compute-1 happy_heisenberg[73411]:     {
Dec 07 09:40:23 compute-1 happy_heisenberg[73411]:         "available": false,
Dec 07 09:40:23 compute-1 happy_heisenberg[73411]:         "being_replaced": false,
Dec 07 09:40:23 compute-1 happy_heisenberg[73411]:         "ceph_device_lvm": false,
Dec 07 09:40:23 compute-1 happy_heisenberg[73411]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 07 09:40:23 compute-1 happy_heisenberg[73411]:         "lsm_data": {},
Dec 07 09:40:23 compute-1 happy_heisenberg[73411]:         "lvs": [],
Dec 07 09:40:23 compute-1 happy_heisenberg[73411]:         "path": "/dev/sr0",
Dec 07 09:40:23 compute-1 happy_heisenberg[73411]:         "rejected_reasons": [
Dec 07 09:40:23 compute-1 happy_heisenberg[73411]:             "Insufficient space (<5GB)",
Dec 07 09:40:23 compute-1 happy_heisenberg[73411]:             "Has a FileSystem"
Dec 07 09:40:23 compute-1 happy_heisenberg[73411]:         ],
Dec 07 09:40:23 compute-1 happy_heisenberg[73411]:         "sys_api": {
Dec 07 09:40:23 compute-1 happy_heisenberg[73411]:             "actuators": null,
Dec 07 09:40:23 compute-1 happy_heisenberg[73411]:             "device_nodes": [
Dec 07 09:40:23 compute-1 happy_heisenberg[73411]:                 "sr0"
Dec 07 09:40:23 compute-1 happy_heisenberg[73411]:             ],
Dec 07 09:40:23 compute-1 happy_heisenberg[73411]:             "devname": "sr0",
Dec 07 09:40:23 compute-1 happy_heisenberg[73411]:             "human_readable_size": "482.00 KB",
Dec 07 09:40:23 compute-1 happy_heisenberg[73411]:             "id_bus": "ata",
Dec 07 09:40:23 compute-1 happy_heisenberg[73411]:             "model": "QEMU DVD-ROM",
Dec 07 09:40:23 compute-1 happy_heisenberg[73411]:             "nr_requests": "2",
Dec 07 09:40:23 compute-1 happy_heisenberg[73411]:             "parent": "/dev/sr0",
Dec 07 09:40:23 compute-1 happy_heisenberg[73411]:             "partitions": {},
Dec 07 09:40:23 compute-1 happy_heisenberg[73411]:             "path": "/dev/sr0",
Dec 07 09:40:23 compute-1 happy_heisenberg[73411]:             "removable": "1",
Dec 07 09:40:23 compute-1 happy_heisenberg[73411]:             "rev": "2.5+",
Dec 07 09:40:23 compute-1 happy_heisenberg[73411]:             "ro": "0",
Dec 07 09:40:23 compute-1 happy_heisenberg[73411]:             "rotational": "1",
Dec 07 09:40:23 compute-1 happy_heisenberg[73411]:             "sas_address": "",
Dec 07 09:40:23 compute-1 happy_heisenberg[73411]:             "sas_device_handle": "",
Dec 07 09:40:23 compute-1 happy_heisenberg[73411]:             "scheduler_mode": "mq-deadline",
Dec 07 09:40:23 compute-1 happy_heisenberg[73411]:             "sectors": 0,
Dec 07 09:40:23 compute-1 happy_heisenberg[73411]:             "sectorsize": "2048",
Dec 07 09:40:23 compute-1 happy_heisenberg[73411]:             "size": 493568.0,
Dec 07 09:40:23 compute-1 happy_heisenberg[73411]:             "support_discard": "2048",
Dec 07 09:40:23 compute-1 happy_heisenberg[73411]:             "type": "disk",
Dec 07 09:40:23 compute-1 happy_heisenberg[73411]:             "vendor": "QEMU"
Dec 07 09:40:23 compute-1 happy_heisenberg[73411]:         }
Dec 07 09:40:23 compute-1 happy_heisenberg[73411]:     }
Dec 07 09:40:23 compute-1 happy_heisenberg[73411]: ]
Dec 07 09:40:23 compute-1 systemd[1]: libpod-23ab9ec4d97632da84fdb5d818f4f9cd211d4c9ed5065f600f0c1acb85f36030.scope: Deactivated successfully.
Dec 07 09:40:23 compute-1 podman[73394]: 2025-12-07 09:40:23.187754839 +0000 UTC m=+0.924472282 container died 23ab9ec4d97632da84fdb5d818f4f9cd211d4c9ed5065f600f0c1acb85f36030 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_heisenberg, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 07 09:40:23 compute-1 systemd[1]: var-lib-containers-storage-overlay-bee752142edd2fc66312069257e31c54ef3bf814b28495efaea4f1be01be27a4-merged.mount: Deactivated successfully.
Dec 07 09:40:23 compute-1 podman[73394]: 2025-12-07 09:40:23.226809338 +0000 UTC m=+0.963526741 container remove 23ab9ec4d97632da84fdb5d818f4f9cd211d4c9ed5065f600f0c1acb85f36030 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325)
Dec 07 09:40:23 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 07 09:40:23 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 07 09:40:23 compute-1 systemd[1]: libpod-conmon-23ab9ec4d97632da84fdb5d818f4f9cd211d4c9ed5065f600f0c1acb85f36030.scope: Deactivated successfully.
Dec 07 09:40:23 compute-1 sudo[73251]: pam_unix(sudo:session): session closed for user root
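The JSON emitted by the ceph-volume inventory run above lists a single device, /dev/sr0, marked "available": false with two rejected_reasons, so no OSD can be placed on it. A minimal sketch, assuming the same json-pretty output has been saved to a hypothetical file named inventory.json, that separates usable devices from rejected ones:

    import json

    # Hypothetical input file holding the json-pretty output shown above.
    with open("inventory.json") as fh:
        devices = json.load(fh)

    for dev in devices:
        if dev.get("available"):
            print(f"{dev['path']}: usable for an OSD")
        else:
            reasons = ", ".join(dev.get("rejected_reasons", [])) or "unknown"
            print(f"{dev['path']}: rejected ({reasons})")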
Dec 07 09:40:23 compute-1 sudo[74416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 07 09:40:23 compute-1 sudo[74416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:40:23 compute-1 sudo[74416]: pam_unix(sudo:session): session closed for user root
Dec 07 09:40:23 compute-1 sudo[74441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/etc/ceph
Dec 07 09:40:23 compute-1 sudo[74441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:40:23 compute-1 sudo[74441]: pam_unix(sudo:session): session closed for user root
Dec 07 09:40:23 compute-1 sudo[74466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/etc/ceph/ceph.conf.new
Dec 07 09:40:23 compute-1 sudo[74466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:40:23 compute-1 sudo[74466]: pam_unix(sudo:session): session closed for user root
Dec 07 09:40:23 compute-1 sudo[74491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c
Dec 07 09:40:23 compute-1 sudo[74491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:40:23 compute-1 sudo[74491]: pam_unix(sudo:session): session closed for user root
Dec 07 09:40:23 compute-1 sudo[74516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/etc/ceph/ceph.conf.new
Dec 07 09:40:23 compute-1 sudo[74516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:40:23 compute-1 sudo[74516]: pam_unix(sudo:session): session closed for user root
Dec 07 09:40:23 compute-1 sudo[74564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/etc/ceph/ceph.conf.new
Dec 07 09:40:23 compute-1 sudo[74564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:40:23 compute-1 sudo[74564]: pam_unix(sudo:session): session closed for user root
Dec 07 09:40:24 compute-1 sudo[74589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/etc/ceph/ceph.conf.new
Dec 07 09:40:24 compute-1 sudo[74589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:40:24 compute-1 sudo[74589]: pam_unix(sudo:session): session closed for user root
Dec 07 09:40:24 compute-1 sudo[74614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 07 09:40:24 compute-1 sudo[74614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:40:24 compute-1 sudo[74614]: pam_unix(sudo:session): session closed for user root
Dec 07 09:40:24 compute-1 sudo[74639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config
Dec 07 09:40:24 compute-1 sudo[74639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:40:24 compute-1 sudo[74639]: pam_unix(sudo:session): session closed for user root
Dec 07 09:40:24 compute-1 sudo[74664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config
Dec 07 09:40:24 compute-1 sudo[74664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:40:24 compute-1 sudo[74664]: pam_unix(sudo:session): session closed for user root
Dec 07 09:40:24 compute-1 sudo[74689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf.new
Dec 07 09:40:24 compute-1 sudo[74689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:40:24 compute-1 sudo[74689]: pam_unix(sudo:session): session closed for user root
Dec 07 09:40:24 compute-1 sudo[74714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c
Dec 07 09:40:24 compute-1 sudo[74714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:40:24 compute-1 sudo[74714]: pam_unix(sudo:session): session closed for user root
Dec 07 09:40:24 compute-1 sudo[74739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf.new
Dec 07 09:40:24 compute-1 sudo[74739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:40:24 compute-1 sudo[74739]: pam_unix(sudo:session): session closed for user root
Dec 07 09:40:24 compute-1 sudo[74787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf.new
Dec 07 09:40:24 compute-1 sudo[74787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:40:24 compute-1 sudo[74787]: pam_unix(sudo:session): session closed for user root
Dec 07 09:40:24 compute-1 sudo[74812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf.new
Dec 07 09:40:24 compute-1 sudo[74812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:40:24 compute-1 sudo[74812]: pam_unix(sudo:session): session closed for user root
Dec 07 09:40:24 compute-1 sudo[74837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf.new /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec 07 09:40:24 compute-1 sudo[74837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:40:24 compute-1 sudo[74837]: pam_unix(sudo:session): session closed for user root
Dec 07 09:40:24 compute-1 sudo[74862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 07 09:40:24 compute-1 sudo[74862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:40:24 compute-1 sudo[74862]: pam_unix(sudo:session): session closed for user root
Dec 07 09:40:24 compute-1 sudo[74887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/etc/ceph
Dec 07 09:40:24 compute-1 sudo[74887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:40:24 compute-1 sudo[74887]: pam_unix(sudo:session): session closed for user root
Dec 07 09:40:24 compute-1 sudo[74912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/etc/ceph/ceph.client.admin.keyring.new
Dec 07 09:40:24 compute-1 sudo[74912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:40:24 compute-1 sudo[74912]: pam_unix(sudo:session): session closed for user root
Dec 07 09:40:24 compute-1 sudo[74937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c
Dec 07 09:40:24 compute-1 sudo[74937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:40:24 compute-1 sudo[74937]: pam_unix(sudo:session): session closed for user root
Dec 07 09:40:25 compute-1 sudo[74962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/etc/ceph/ceph.client.admin.keyring.new
Dec 07 09:40:25 compute-1 sudo[74962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:40:25 compute-1 sudo[74962]: pam_unix(sudo:session): session closed for user root
Dec 07 09:40:25 compute-1 sudo[75010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/etc/ceph/ceph.client.admin.keyring.new
Dec 07 09:40:25 compute-1 sudo[75010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:40:25 compute-1 sudo[75010]: pam_unix(sudo:session): session closed for user root
Dec 07 09:40:25 compute-1 sudo[75035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/etc/ceph/ceph.client.admin.keyring.new
Dec 07 09:40:25 compute-1 sudo[75035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:40:25 compute-1 sudo[75035]: pam_unix(sudo:session): session closed for user root
Dec 07 09:40:25 compute-1 sudo[75060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 07 09:40:25 compute-1 sudo[75060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:40:25 compute-1 sudo[75060]: pam_unix(sudo:session): session closed for user root
Dec 07 09:40:25 compute-1 sudo[75085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config
Dec 07 09:40:25 compute-1 sudo[75085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:40:25 compute-1 sudo[75085]: pam_unix(sudo:session): session closed for user root
Dec 07 09:40:25 compute-1 sudo[75110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config
Dec 07 09:40:25 compute-1 sudo[75110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:40:25 compute-1 sudo[75110]: pam_unix(sudo:session): session closed for user root
Dec 07 09:40:25 compute-1 sudo[75135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring.new
Dec 07 09:40:25 compute-1 sudo[75135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:40:25 compute-1 sudo[75135]: pam_unix(sudo:session): session closed for user root
Dec 07 09:40:25 compute-1 sudo[75160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c
Dec 07 09:40:25 compute-1 sudo[75160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:40:25 compute-1 sudo[75160]: pam_unix(sudo:session): session closed for user root
Dec 07 09:40:25 compute-1 sudo[75185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring.new
Dec 07 09:40:25 compute-1 sudo[75185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:40:25 compute-1 sudo[75185]: pam_unix(sudo:session): session closed for user root
Dec 07 09:40:25 compute-1 sudo[75233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring.new
Dec 07 09:40:25 compute-1 sudo[75233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:40:25 compute-1 sudo[75233]: pam_unix(sudo:session): session closed for user root
Dec 07 09:40:25 compute-1 sudo[75258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring.new
Dec 07 09:40:25 compute-1 sudo[75258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:40:25 compute-1 sudo[75258]: pam_unix(sudo:session): session closed for user root
Dec 07 09:40:25 compute-1 sudo[75283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring.new /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring
Dec 07 09:40:25 compute-1 sudo[75283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:40:25 compute-1 sudo[75283]: pam_unix(sudo:session): session closed for user root
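The long mkdir/touch/chown/chmod/mv sequences above distribute ceph.conf (mode 644) and ceph.client.admin.keyring (mode 600) by staging each file as a .new copy under /tmp/cephadm-<fsid>/, fixing ownership and mode there, and only then moving it onto its final path, so a reader never observes a half-written config or keyring. A minimal Python sketch of that stage-then-rename pattern, with hypothetical content and paths (cephadm itself fetches the real data from the cluster):

    import os
    import tempfile

    def install_file(content: bytes, dest: str, mode: int) -> None:
        """Write to a temporary file, set the final mode, then rename over the
        target (mirroring the .new + mv steps recorded in the log)."""
        directory = os.path.dirname(dest) or "."
        os.makedirs(directory, exist_ok=True)
        fd, tmp = tempfile.mkstemp(dir=directory, suffix=".new")
        try:
            with os.fdopen(fd, "wb") as fh:
                fh.write(content)
            os.chmod(tmp, mode)
            os.replace(tmp, dest)  # atomic when both paths share a filesystem
        finally:
            if os.path.exists(tmp):
                os.unlink(tmp)

    # Hypothetical usage (content shortened for illustration):
    # install_file(b"[global]\n", "/etc/ceph/ceph.conf", 0o644)
    # install_file(b"[client.admin]\n", "/etc/ceph/ceph.client.admin.keyring", 0o600)

os.replace is only atomic when source and destination sit on the same filesystem, which is why the sketch stages the temporary file next to its destination rather than under /tmp.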
Dec 07 09:40:26 compute-1 sudo[75308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 09:40:26 compute-1 sudo[75308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:40:26 compute-1 sudo[75308]: pam_unix(sudo:session): session closed for user root
Dec 07 09:40:26 compute-1 sudo[75333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c
Dec 07 09:40:26 compute-1 sudo[75333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:40:26 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 07 09:40:26 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 07 09:40:26 compute-1 podman[75399]: 2025-12-07 09:40:26.641821563 +0000 UTC m=+0.110540312 container create b0e70badd2a3439c4a67f2fa219d85f4489ec2b4f5c5cc57ee829bbc2f867109 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_banzai, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 07 09:40:26 compute-1 podman[75399]: 2025-12-07 09:40:26.567634099 +0000 UTC m=+0.036352818 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 07 09:40:26 compute-1 systemd[1]: Started libpod-conmon-b0e70badd2a3439c4a67f2fa219d85f4489ec2b4f5c5cc57ee829bbc2f867109.scope.
Dec 07 09:40:26 compute-1 systemd[1]: Started libcrun container.
Dec 07 09:40:26 compute-1 podman[75399]: 2025-12-07 09:40:26.812679025 +0000 UTC m=+0.281397824 container init b0e70badd2a3439c4a67f2fa219d85f4489ec2b4f5c5cc57ee829bbc2f867109 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 07 09:40:26 compute-1 podman[75399]: 2025-12-07 09:40:26.823083762 +0000 UTC m=+0.291802481 container start b0e70badd2a3439c4a67f2fa219d85f4489ec2b4f5c5cc57ee829bbc2f867109 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 07 09:40:26 compute-1 interesting_banzai[75415]: 167 167
Dec 07 09:40:26 compute-1 systemd[1]: libpod-b0e70badd2a3439c4a67f2fa219d85f4489ec2b4f5c5cc57ee829bbc2f867109.scope: Deactivated successfully.
Dec 07 09:40:26 compute-1 podman[75399]: 2025-12-07 09:40:26.889796455 +0000 UTC m=+0.358515204 container attach b0e70badd2a3439c4a67f2fa219d85f4489ec2b4f5c5cc57ee829bbc2f867109 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_banzai, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 07 09:40:26 compute-1 podman[75399]: 2025-12-07 09:40:26.892790315 +0000 UTC m=+0.361509104 container died b0e70badd2a3439c4a67f2fa219d85f4489ec2b4f5c5cc57ee829bbc2f867109 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 07 09:40:27 compute-1 podman[75399]: 2025-12-07 09:40:27.175513612 +0000 UTC m=+0.644232361 container remove b0e70badd2a3439c4a67f2fa219d85f4489ec2b4f5c5cc57ee829bbc2f867109 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_banzai, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 07 09:40:27 compute-1 systemd[1]: libpod-conmon-b0e70badd2a3439c4a67f2fa219d85f4489ec2b4f5c5cc57ee829bbc2f867109.scope: Deactivated successfully.
Dec 07 09:40:27 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 07 09:40:27 compute-1 systemd[1]: Reloading.
Dec 07 09:40:27 compute-1 systemd-rc-local-generator[75462]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:40:27 compute-1 systemd-sysv-generator[75465]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:40:27 compute-1 systemd[1]: Reloading.
Dec 07 09:40:27 compute-1 systemd-rc-local-generator[75498]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:40:27 compute-1 systemd-sysv-generator[75504]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:40:27 compute-1 systemd[1]: Reached target All Ceph clusters and services.
Dec 07 09:40:27 compute-1 systemd[1]: Reloading.
Dec 07 09:40:28 compute-1 systemd-rc-local-generator[75536]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:40:28 compute-1 systemd-sysv-generator[75540]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:40:28 compute-1 systemd[1]: Reached target Ceph cluster 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec 07 09:40:28 compute-1 systemd[1]: Reloading.
Dec 07 09:40:28 compute-1 systemd-rc-local-generator[75575]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:40:28 compute-1 systemd-sysv-generator[75579]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:40:28 compute-1 systemd[1]: Reloading.
Dec 07 09:40:28 compute-1 systemd-sysv-generator[75612]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:40:28 compute-1 systemd-rc-local-generator[75608]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:40:28 compute-1 systemd[1]: Created slice Slice /system/ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec 07 09:40:28 compute-1 systemd[1]: Reached target System Time Set.
Dec 07 09:40:28 compute-1 systemd[1]: Reached target System Time Synchronized.
Dec 07 09:40:28 compute-1 systemd[1]: Starting Ceph crash.compute-1 for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c...
Dec 07 09:40:28 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 07 09:40:28 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 07 09:40:29 compute-1 podman[75672]: 2025-12-07 09:40:29.093974347 +0000 UTC m=+0.070236166 container create 0adb3b962f9be9f77484cea48a61ddb6dbdac27d2f6f2d11917c2759647cf1b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-crash-compute-1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec 07 09:40:29 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f64209ecf644a7acad413d40bb4d609e3b1a05baaea0444d15bdb98e9f42b994/merged/etc/ceph/ceph.client.crash.compute-1.keyring supports timestamps until 2038 (0x7fffffff)
Dec 07 09:40:29 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f64209ecf644a7acad413d40bb4d609e3b1a05baaea0444d15bdb98e9f42b994/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 07 09:40:29 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f64209ecf644a7acad413d40bb4d609e3b1a05baaea0444d15bdb98e9f42b994/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 07 09:40:29 compute-1 podman[75672]: 2025-12-07 09:40:29.042874273 +0000 UTC m=+0.019136082 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 07 09:40:29 compute-1 podman[75672]: 2025-12-07 09:40:29.167309781 +0000 UTC m=+0.143571590 container init 0adb3b962f9be9f77484cea48a61ddb6dbdac27d2f6f2d11917c2759647cf1b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-crash-compute-1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 07 09:40:29 compute-1 podman[75672]: 2025-12-07 09:40:29.177142677 +0000 UTC m=+0.153404456 container start 0adb3b962f9be9f77484cea48a61ddb6dbdac27d2f6f2d11917c2759647cf1b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-crash-compute-1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True)
Dec 07 09:40:29 compute-1 bash[75672]: 0adb3b962f9be9f77484cea48a61ddb6dbdac27d2f6f2d11917c2759647cf1b4
Dec 07 09:40:29 compute-1 systemd[1]: Started Ceph crash.compute-1 for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec 07 09:40:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-crash-compute-1[75688]: INFO:ceph-crash:pinging cluster to exercise our key
Dec 07 09:40:29 compute-1 sudo[75333]: pam_unix(sudo:session): session closed for user root
Dec 07 09:40:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-crash-compute-1[75688]: 2025-12-07T09:40:29.369+0000 7f6a879ec640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec 07 09:40:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-crash-compute-1[75688]: 2025-12-07T09:40:29.369+0000 7f6a879ec640 -1 AuthRegistry(0x7f6a800698f0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec 07 09:40:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-crash-compute-1[75688]: 2025-12-07T09:40:29.371+0000 7f6a879ec640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec 07 09:40:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-crash-compute-1[75688]: 2025-12-07T09:40:29.371+0000 7f6a879ec640 -1 AuthRegistry(0x7f6a879eaff0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec 07 09:40:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-crash-compute-1[75688]: 2025-12-07T09:40:29.373+0000 7f6a85761640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Dec 07 09:40:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-crash-compute-1[75688]: 2025-12-07T09:40:29.374+0000 7f6a879ec640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Dec 07 09:40:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-crash-compute-1[75688]: [errno 13] RADOS permission denied (error connecting to the cluster)
Dec 07 09:40:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-crash-compute-1[75688]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
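The crash agent that just started pings the cluster once to exercise its key; here the ping is rejected (no keyring is found at the default /etc/ceph paths it probes, cephx is disabled, and the monitor refuses the connection with errno 13), after which the agent settles into scanning /var/lib/ceph/crash every 600 s for new crash reports. A minimal sketch, assuming only the host-side path reported in the log, that lists reports still waiting to be posted (handled reports are kept in a posted/ subdirectory):

    import os

    CRASH_DIR = "/var/lib/ceph/crash"  # path reported by ceph-crash in the log

    def pending_crash_reports(base: str = CRASH_DIR) -> list[str]:
        """Return crash report directories not yet moved into the 'posted' subdir."""
        if not os.path.isdir(base):
            return []
        return sorted(
            entry for entry in os.listdir(base)
            if entry != "posted" and os.path.isdir(os.path.join(base, entry))
        )

    if __name__ == "__main__":
        for report in pending_crash_reports():
            print(report)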
Dec 07 09:40:29 compute-1 sudo[75705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 09:40:29 compute-1 sudo[75705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:40:29 compute-1 sudo[75705]: pam_unix(sudo:session): session closed for user root
Dec 07 09:40:29 compute-1 sudo[75730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 07 09:40:29 compute-1 sudo[75730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
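The deploy step above hands ceph-volume a pre-created logical volume, /dev/ceph_vg0/ceph_lv0, via lvm batch --no-auto, with the OSD spec affinity default_drive_group set in the environment and the config piped in on stdin (--config-json -). A small, illustrative Python check, assuming the lvs tool is available on the host and reusing the VG/LV names from that command line, to confirm the volume exists before such a run:

    import subprocess

    # Volume group / logical volume names taken from the lvm batch command above.
    VG, LV = "ceph_vg0", "ceph_lv0"

    def lv_exists(vg: str, lv: str) -> bool:
        """Ask LVM whether vg/lv is present (lvs exits non-zero if it is not)."""
        result = subprocess.run(
            ["lvs", "--noheadings", "-o", "lv_name", f"{vg}/{lv}"],
            capture_output=True, text=True,
        )
        return result.returncode == 0 and result.stdout.strip() == lv

    if __name__ == "__main__":
        print(f"/dev/{VG}/{LV} present:", lv_exists(VG, LV))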
Dec 07 09:40:30 compute-1 podman[75794]: 2025-12-07 09:40:30.004143512 +0000 UTC m=+0.041019562 container create 92942ec71ba2f0af0ace07ae391d5b780671e057688f8199d7973ee06da6da46 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_fermi, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 07 09:40:30 compute-1 systemd[1]: Started libpod-conmon-92942ec71ba2f0af0ace07ae391d5b780671e057688f8199d7973ee06da6da46.scope.
Dec 07 09:40:30 compute-1 systemd[1]: Started libcrun container.
Dec 07 09:40:30 compute-1 podman[75794]: 2025-12-07 09:40:29.985572435 +0000 UTC m=+0.022448505 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 07 09:40:30 compute-1 podman[75794]: 2025-12-07 09:40:30.082237674 +0000 UTC m=+0.119113744 container init 92942ec71ba2f0af0ace07ae391d5b780671e057688f8199d7973ee06da6da46 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_fermi, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 07 09:40:30 compute-1 podman[75794]: 2025-12-07 09:40:30.094411241 +0000 UTC m=+0.131287291 container start 92942ec71ba2f0af0ace07ae391d5b780671e057688f8199d7973ee06da6da46 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 07 09:40:30 compute-1 tender_fermi[75810]: 167 167
Dec 07 09:40:30 compute-1 systemd[1]: libpod-92942ec71ba2f0af0ace07ae391d5b780671e057688f8199d7973ee06da6da46.scope: Deactivated successfully.
Dec 07 09:40:30 compute-1 conmon[75810]: conmon 92942ec71ba2f0af0ace <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-92942ec71ba2f0af0ace07ae391d5b780671e057688f8199d7973ee06da6da46.scope/container/memory.events
Dec 07 09:40:30 compute-1 podman[75794]: 2025-12-07 09:40:30.099367775 +0000 UTC m=+0.136243855 container attach 92942ec71ba2f0af0ace07ae391d5b780671e057688f8199d7973ee06da6da46 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 07 09:40:30 compute-1 podman[75794]: 2025-12-07 09:40:30.100314469 +0000 UTC m=+0.137190519 container died 92942ec71ba2f0af0ace07ae391d5b780671e057688f8199d7973ee06da6da46 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_fermi, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 07 09:40:30 compute-1 systemd[1]: var-lib-containers-storage-overlay-ce99c341f895b62d698581554d4375573ea3769e9106a46df4a5ce7f8229a1b4-merged.mount: Deactivated successfully.
Dec 07 09:40:30 compute-1 podman[75794]: 2025-12-07 09:40:30.144361406 +0000 UTC m=+0.181237456 container remove 92942ec71ba2f0af0ace07ae391d5b780671e057688f8199d7973ee06da6da46 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True)
Dec 07 09:40:30 compute-1 systemd[1]: libpod-conmon-92942ec71ba2f0af0ace07ae391d5b780671e057688f8199d7973ee06da6da46.scope: Deactivated successfully.
Dec 07 09:40:30 compute-1 podman[75834]: 2025-12-07 09:40:30.347026739 +0000 UTC m=+0.079351915 container create 703baf42d5e4dc81f7c779261847ef7dc8d547667c713afaa5d5ef6da2f23455 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_brattain, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec 07 09:40:30 compute-1 podman[75834]: 2025-12-07 09:40:30.291049613 +0000 UTC m=+0.023374829 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 07 09:40:30 compute-1 systemd[1]: Started libpod-conmon-703baf42d5e4dc81f7c779261847ef7dc8d547667c713afaa5d5ef6da2f23455.scope.
Dec 07 09:40:30 compute-1 systemd[1]: Started libcrun container.
Dec 07 09:40:30 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcfb141ac08a080cd3f4757162b89f660ce39f7f7f703be5ac0add679667aebd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 07 09:40:30 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcfb141ac08a080cd3f4757162b89f660ce39f7f7f703be5ac0add679667aebd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 07 09:40:30 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcfb141ac08a080cd3f4757162b89f660ce39f7f7f703be5ac0add679667aebd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 07 09:40:30 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcfb141ac08a080cd3f4757162b89f660ce39f7f7f703be5ac0add679667aebd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 07 09:40:30 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcfb141ac08a080cd3f4757162b89f660ce39f7f7f703be5ac0add679667aebd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 07 09:40:30 compute-1 podman[75834]: 2025-12-07 09:40:30.434668892 +0000 UTC m=+0.166994048 container init 703baf42d5e4dc81f7c779261847ef7dc8d547667c713afaa5d5ef6da2f23455 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_brattain, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec 07 09:40:30 compute-1 podman[75834]: 2025-12-07 09:40:30.443890513 +0000 UTC m=+0.176215659 container start 703baf42d5e4dc81f7c779261847ef7dc8d547667c713afaa5d5ef6da2f23455 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_brattain, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 07 09:40:30 compute-1 podman[75834]: 2025-12-07 09:40:30.447331711 +0000 UTC m=+0.179656867 container attach 703baf42d5e4dc81f7c779261847ef7dc8d547667c713afaa5d5ef6da2f23455 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 07 09:40:30 compute-1 sleepy_brattain[75850]: --> passed data devices: 0 physical, 1 LVM
Dec 07 09:40:30 compute-1 sleepy_brattain[75850]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 07 09:40:30 compute-1 sleepy_brattain[75850]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 07 09:40:30 compute-1 sleepy_brattain[75850]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 24b45d5b-5e40-4ac8-980f-eccc62ab0425
Dec 07 09:40:31 compute-1 sleepy_brattain[75850]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Dec 07 09:40:31 compute-1 sleepy_brattain[75850]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Dec 07 09:40:31 compute-1 sleepy_brattain[75850]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 07 09:40:31 compute-1 sleepy_brattain[75850]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Dec 07 09:40:31 compute-1 lvm[75911]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 07 09:40:31 compute-1 lvm[75911]: VG ceph_vg0 finished
Dec 07 09:40:31 compute-1 sleepy_brattain[75850]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Dec 07 09:40:31 compute-1 sleepy_brattain[75850]:  stderr: got monmap epoch 1
Dec 07 09:40:32 compute-1 sleepy_brattain[75850]: --> Creating keyring file for osd.1
Dec 07 09:40:32 compute-1 sleepy_brattain[75850]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Dec 07 09:40:32 compute-1 sleepy_brattain[75850]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Dec 07 09:40:32 compute-1 sleepy_brattain[75850]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 24b45d5b-5e40-4ac8-980f-eccc62ab0425 --setuser ceph --setgroup ceph
Dec 07 09:40:35 compute-1 sleepy_brattain[75850]:  stderr: 2025-12-07T09:40:32.114+0000 7f18e2526740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) No valid bdev label found
Dec 07 09:40:35 compute-1 sleepy_brattain[75850]:  stderr: 2025-12-07T09:40:32.382+0000 7f18e2526740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Dec 07 09:40:35 compute-1 sleepy_brattain[75850]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Dec 07 09:40:35 compute-1 sleepy_brattain[75850]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 07 09:40:35 compute-1 sleepy_brattain[75850]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec 07 09:40:36 compute-1 sleepy_brattain[75850]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Dec 07 09:40:36 compute-1 sleepy_brattain[75850]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec 07 09:40:36 compute-1 sleepy_brattain[75850]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 07 09:40:36 compute-1 sleepy_brattain[75850]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 07 09:40:36 compute-1 sleepy_brattain[75850]: --> ceph-volume lvm activate successful for osd ID: 1
Dec 07 09:40:36 compute-1 sleepy_brattain[75850]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Dec 07 09:40:36 compute-1 systemd[1]: libpod-703baf42d5e4dc81f7c779261847ef7dc8d547667c713afaa5d5ef6da2f23455.scope: Deactivated successfully.
Dec 07 09:40:36 compute-1 systemd[1]: libpod-703baf42d5e4dc81f7c779261847ef7dc8d547667c713afaa5d5ef6da2f23455.scope: Consumed 2.067s CPU time.
Dec 07 09:40:36 compute-1 podman[76818]: 2025-12-07 09:40:36.194944859 +0000 UTC m=+0.043698989 container died 703baf42d5e4dc81f7c779261847ef7dc8d547667c713afaa5d5ef6da2f23455 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec 07 09:40:36 compute-1 systemd[1]: var-lib-containers-storage-overlay-fcfb141ac08a080cd3f4757162b89f660ce39f7f7f703be5ac0add679667aebd-merged.mount: Deactivated successfully.
Dec 07 09:40:36 compute-1 podman[76818]: 2025-12-07 09:40:36.2391132 +0000 UTC m=+0.087867269 container remove 703baf42d5e4dc81f7c779261847ef7dc8d547667c713afaa5d5ef6da2f23455 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_brattain, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True)
Dec 07 09:40:36 compute-1 systemd[1]: libpod-conmon-703baf42d5e4dc81f7c779261847ef7dc8d547667c713afaa5d5ef6da2f23455.scope: Deactivated successfully.
Dec 07 09:40:36 compute-1 sudo[75730]: pam_unix(sudo:session): session closed for user root
Dec 07 09:40:36 compute-1 sudo[76831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 09:40:36 compute-1 sudo[76831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:40:36 compute-1 sudo[76831]: pam_unix(sudo:session): session closed for user root
Dec 07 09:40:36 compute-1 sudo[76856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -- lvm list --format json
Dec 07 09:40:36 compute-1 sudo[76856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:40:36 compute-1 podman[76921]: 2025-12-07 09:40:36.891241229 +0000 UTC m=+0.045477883 container create e6a563e0c745aca0fa1090cdbd02c55aed10ec61af3a3f52376a1f4587d89f79 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 07 09:40:36 compute-1 systemd[1]: Started libpod-conmon-e6a563e0c745aca0fa1090cdbd02c55aed10ec61af3a3f52376a1f4587d89f79.scope.
Dec 07 09:40:36 compute-1 systemd[1]: Started libcrun container.
Dec 07 09:40:36 compute-1 podman[76921]: 2025-12-07 09:40:36.873155954 +0000 UTC m=+0.027392648 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 07 09:40:36 compute-1 podman[76921]: 2025-12-07 09:40:36.982817521 +0000 UTC m=+0.137054195 container init e6a563e0c745aca0fa1090cdbd02c55aed10ec61af3a3f52376a1f4587d89f79 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec 07 09:40:36 compute-1 podman[76921]: 2025-12-07 09:40:36.994017942 +0000 UTC m=+0.148254606 container start e6a563e0c745aca0fa1090cdbd02c55aed10ec61af3a3f52376a1f4587d89f79 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_jepsen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 07 09:40:36 compute-1 podman[76921]: 2025-12-07 09:40:36.997765816 +0000 UTC m=+0.152002490 container attach e6a563e0c745aca0fa1090cdbd02c55aed10ec61af3a3f52376a1f4587d89f79 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec 07 09:40:36 compute-1 epic_jepsen[76938]: 167 167
Dec 07 09:40:36 compute-1 systemd[1]: libpod-e6a563e0c745aca0fa1090cdbd02c55aed10ec61af3a3f52376a1f4587d89f79.scope: Deactivated successfully.
Dec 07 09:40:36 compute-1 podman[76921]: 2025-12-07 09:40:36.999215912 +0000 UTC m=+0.153452576 container died e6a563e0c745aca0fa1090cdbd02c55aed10ec61af3a3f52376a1f4587d89f79 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_jepsen, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 07 09:40:37 compute-1 systemd[1]: var-lib-containers-storage-overlay-cb7447d58aa1ded9f78db9b387488f280e231208b1de28b116ab50678894f53e-merged.mount: Deactivated successfully.
Dec 07 09:40:37 compute-1 podman[76921]: 2025-12-07 09:40:37.039521656 +0000 UTC m=+0.193758320 container remove e6a563e0c745aca0fa1090cdbd02c55aed10ec61af3a3f52376a1f4587d89f79 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_jepsen, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 07 09:40:37 compute-1 systemd[1]: libpod-conmon-e6a563e0c745aca0fa1090cdbd02c55aed10ec61af3a3f52376a1f4587d89f79.scope: Deactivated successfully.
Dec 07 09:40:37 compute-1 podman[76960]: 2025-12-07 09:40:37.188233903 +0000 UTC m=+0.043659999 container create 6db4412d956757ab2575406a25a7da191f4603473ae0dbf8b32d8f56215e8cf4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_visvesvaraya, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Dec 07 09:40:37 compute-1 systemd[1]: Started libpod-conmon-6db4412d956757ab2575406a25a7da191f4603473ae0dbf8b32d8f56215e8cf4.scope.
Dec 07 09:40:37 compute-1 podman[76960]: 2025-12-07 09:40:37.164870525 +0000 UTC m=+0.020296601 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 07 09:40:37 compute-1 systemd[1]: Started libcrun container.
Dec 07 09:40:37 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56c418c05499344cf9c7fa34633670c98e3e3270b378f875997533e4fbf0011c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 07 09:40:37 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56c418c05499344cf9c7fa34633670c98e3e3270b378f875997533e4fbf0011c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 07 09:40:37 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56c418c05499344cf9c7fa34633670c98e3e3270b378f875997533e4fbf0011c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 07 09:40:37 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56c418c05499344cf9c7fa34633670c98e3e3270b378f875997533e4fbf0011c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 07 09:40:37 compute-1 podman[76960]: 2025-12-07 09:40:37.286065002 +0000 UTC m=+0.141491108 container init 6db4412d956757ab2575406a25a7da191f4603473ae0dbf8b32d8f56215e8cf4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 07 09:40:37 compute-1 podman[76960]: 2025-12-07 09:40:37.297710534 +0000 UTC m=+0.153136600 container start 6db4412d956757ab2575406a25a7da191f4603473ae0dbf8b32d8f56215e8cf4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_visvesvaraya, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 07 09:40:37 compute-1 podman[76960]: 2025-12-07 09:40:37.301908989 +0000 UTC m=+0.157335075 container attach 6db4412d956757ab2575406a25a7da191f4603473ae0dbf8b32d8f56215e8cf4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_visvesvaraya, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 07 09:40:37 compute-1 sleepy_visvesvaraya[76976]: {
Dec 07 09:40:37 compute-1 sleepy_visvesvaraya[76976]:     "1": [
Dec 07 09:40:37 compute-1 sleepy_visvesvaraya[76976]:         {
Dec 07 09:40:37 compute-1 sleepy_visvesvaraya[76976]:             "devices": [
Dec 07 09:40:37 compute-1 sleepy_visvesvaraya[76976]:                 "/dev/loop3"
Dec 07 09:40:37 compute-1 sleepy_visvesvaraya[76976]:             ],
Dec 07 09:40:37 compute-1 sleepy_visvesvaraya[76976]:             "lv_name": "ceph_lv0",
Dec 07 09:40:37 compute-1 sleepy_visvesvaraya[76976]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 07 09:40:37 compute-1 sleepy_visvesvaraya[76976]:             "lv_size": "21470642176",
Dec 07 09:40:37 compute-1 sleepy_visvesvaraya[76976]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=xfzVfX-CHOT-Wcob-Uovv-LxeN-Pp2f-hafdIh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=75f4c9fd-539a-5e17-b55a-0a12a4e2736c,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=24b45d5b-5e40-4ac8-980f-eccc62ab0425,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 07 09:40:37 compute-1 sleepy_visvesvaraya[76976]:             "lv_uuid": "xfzVfX-CHOT-Wcob-Uovv-LxeN-Pp2f-hafdIh",
Dec 07 09:40:37 compute-1 sleepy_visvesvaraya[76976]:             "name": "ceph_lv0",
Dec 07 09:40:37 compute-1 sleepy_visvesvaraya[76976]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 07 09:40:37 compute-1 sleepy_visvesvaraya[76976]:             "tags": {
Dec 07 09:40:37 compute-1 sleepy_visvesvaraya[76976]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 07 09:40:37 compute-1 sleepy_visvesvaraya[76976]:                 "ceph.block_uuid": "xfzVfX-CHOT-Wcob-Uovv-LxeN-Pp2f-hafdIh",
Dec 07 09:40:37 compute-1 sleepy_visvesvaraya[76976]:                 "ceph.cephx_lockbox_secret": "",
Dec 07 09:40:37 compute-1 sleepy_visvesvaraya[76976]:                 "ceph.cluster_fsid": "75f4c9fd-539a-5e17-b55a-0a12a4e2736c",
Dec 07 09:40:37 compute-1 sleepy_visvesvaraya[76976]:                 "ceph.cluster_name": "ceph",
Dec 07 09:40:37 compute-1 sleepy_visvesvaraya[76976]:                 "ceph.crush_device_class": "",
Dec 07 09:40:37 compute-1 sleepy_visvesvaraya[76976]:                 "ceph.encrypted": "0",
Dec 07 09:40:37 compute-1 sleepy_visvesvaraya[76976]:                 "ceph.osd_fsid": "24b45d5b-5e40-4ac8-980f-eccc62ab0425",
Dec 07 09:40:37 compute-1 sleepy_visvesvaraya[76976]:                 "ceph.osd_id": "1",
Dec 07 09:40:37 compute-1 sleepy_visvesvaraya[76976]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 07 09:40:37 compute-1 sleepy_visvesvaraya[76976]:                 "ceph.type": "block",
Dec 07 09:40:37 compute-1 sleepy_visvesvaraya[76976]:                 "ceph.vdo": "0",
Dec 07 09:40:37 compute-1 sleepy_visvesvaraya[76976]:                 "ceph.with_tpm": "0"
Dec 07 09:40:37 compute-1 sleepy_visvesvaraya[76976]:             },
Dec 07 09:40:37 compute-1 sleepy_visvesvaraya[76976]:             "type": "block",
Dec 07 09:40:37 compute-1 sleepy_visvesvaraya[76976]:             "vg_name": "ceph_vg0"
Dec 07 09:40:37 compute-1 sleepy_visvesvaraya[76976]:         }
Dec 07 09:40:37 compute-1 sleepy_visvesvaraya[76976]:     ]
Dec 07 09:40:37 compute-1 sleepy_visvesvaraya[76976]: }
Dec 07 09:40:37 compute-1 systemd[1]: libpod-6db4412d956757ab2575406a25a7da191f4603473ae0dbf8b32d8f56215e8cf4.scope: Deactivated successfully.
Dec 07 09:40:37 compute-1 podman[76960]: 2025-12-07 09:40:37.615361727 +0000 UTC m=+0.470787823 container died 6db4412d956757ab2575406a25a7da191f4603473ae0dbf8b32d8f56215e8cf4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 07 09:40:37 compute-1 systemd[1]: var-lib-containers-storage-overlay-56c418c05499344cf9c7fa34633670c98e3e3270b378f875997533e4fbf0011c-merged.mount: Deactivated successfully.
Dec 07 09:40:37 compute-1 podman[76960]: 2025-12-07 09:40:37.664439571 +0000 UTC m=+0.519865637 container remove 6db4412d956757ab2575406a25a7da191f4603473ae0dbf8b32d8f56215e8cf4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_visvesvaraya, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 07 09:40:37 compute-1 systemd[1]: libpod-conmon-6db4412d956757ab2575406a25a7da191f4603473ae0dbf8b32d8f56215e8cf4.scope: Deactivated successfully.
Dec 07 09:40:37 compute-1 sudo[76856]: pam_unix(sudo:session): session closed for user root
Dec 07 09:40:37 compute-1 sudo[76997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 09:40:37 compute-1 sudo[76997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:40:37 compute-1 sudo[76997]: pam_unix(sudo:session): session closed for user root
Dec 07 09:40:37 compute-1 sudo[77022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c
Dec 07 09:40:37 compute-1 sudo[77022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:40:38 compute-1 podman[77087]: 2025-12-07 09:40:38.250802128 +0000 UTC m=+0.037051262 container create 8e56e2cb1e2cef1ff262fe562d8655cfc7ee6758fd6768c20b0f6a310c15b0da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec 07 09:40:38 compute-1 systemd[1]: Started libpod-conmon-8e56e2cb1e2cef1ff262fe562d8655cfc7ee6758fd6768c20b0f6a310c15b0da.scope.
Dec 07 09:40:38 compute-1 systemd[1]: Started libcrun container.
Dec 07 09:40:38 compute-1 podman[77087]: 2025-12-07 09:40:38.311203896 +0000 UTC m=+0.097453050 container init 8e56e2cb1e2cef1ff262fe562d8655cfc7ee6758fd6768c20b0f6a310c15b0da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 07 09:40:38 compute-1 podman[77087]: 2025-12-07 09:40:38.317238908 +0000 UTC m=+0.103488082 container start 8e56e2cb1e2cef1ff262fe562d8655cfc7ee6758fd6768c20b0f6a310c15b0da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True)
Dec 07 09:40:38 compute-1 fervent_mclaren[77103]: 167 167
Dec 07 09:40:38 compute-1 podman[77087]: 2025-12-07 09:40:38.32091436 +0000 UTC m=+0.107163494 container attach 8e56e2cb1e2cef1ff262fe562d8655cfc7ee6758fd6768c20b0f6a310c15b0da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_mclaren, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 07 09:40:38 compute-1 systemd[1]: libpod-8e56e2cb1e2cef1ff262fe562d8655cfc7ee6758fd6768c20b0f6a310c15b0da.scope: Deactivated successfully.
Dec 07 09:40:38 compute-1 podman[77087]: 2025-12-07 09:40:38.321822702 +0000 UTC m=+0.108071836 container died 8e56e2cb1e2cef1ff262fe562d8655cfc7ee6758fd6768c20b0f6a310c15b0da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 07 09:40:38 compute-1 podman[77087]: 2025-12-07 09:40:38.233339679 +0000 UTC m=+0.019588833 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 07 09:40:38 compute-1 systemd[1]: var-lib-containers-storage-overlay-0df3ef5f604b0faaccbe9da7e95d65434bb630b63f063fb190e2fa40d7530a30-merged.mount: Deactivated successfully.
Dec 07 09:40:38 compute-1 podman[77087]: 2025-12-07 09:40:38.359989302 +0000 UTC m=+0.146238436 container remove 8e56e2cb1e2cef1ff262fe562d8655cfc7ee6758fd6768c20b0f6a310c15b0da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True)
Dec 07 09:40:38 compute-1 systemd[1]: libpod-conmon-8e56e2cb1e2cef1ff262fe562d8655cfc7ee6758fd6768c20b0f6a310c15b0da.scope: Deactivated successfully.
Dec 07 09:40:38 compute-1 podman[77132]: 2025-12-07 09:40:38.638719397 +0000 UTC m=+0.048938671 container create f75394d309c98f5176a28daf9d8d52e9f04bb03c8fad8da30bc4d80e9cab0583 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-1-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 07 09:40:38 compute-1 systemd[1]: Started libpod-conmon-f75394d309c98f5176a28daf9d8d52e9f04bb03c8fad8da30bc4d80e9cab0583.scope.
Dec 07 09:40:38 compute-1 systemd[1]: Started libcrun container.
Dec 07 09:40:38 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0cad818b64fd18076eaa12666cdf857236d87bdc26840328b0a0f7d6374b6bb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 07 09:40:38 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0cad818b64fd18076eaa12666cdf857236d87bdc26840328b0a0f7d6374b6bb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 07 09:40:38 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0cad818b64fd18076eaa12666cdf857236d87bdc26840328b0a0f7d6374b6bb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 07 09:40:38 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0cad818b64fd18076eaa12666cdf857236d87bdc26840328b0a0f7d6374b6bb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 07 09:40:38 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0cad818b64fd18076eaa12666cdf857236d87bdc26840328b0a0f7d6374b6bb/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec 07 09:40:38 compute-1 podman[77132]: 2025-12-07 09:40:38.624540451 +0000 UTC m=+0.034759775 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 07 09:40:38 compute-1 podman[77132]: 2025-12-07 09:40:38.725307394 +0000 UTC m=+0.135526708 container init f75394d309c98f5176a28daf9d8d52e9f04bb03c8fad8da30bc4d80e9cab0583 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-1-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec 07 09:40:38 compute-1 podman[77132]: 2025-12-07 09:40:38.735422597 +0000 UTC m=+0.145641891 container start f75394d309c98f5176a28daf9d8d52e9f04bb03c8fad8da30bc4d80e9cab0583 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-1-activate-test, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 07 09:40:38 compute-1 podman[77132]: 2025-12-07 09:40:38.738986497 +0000 UTC m=+0.149205781 container attach f75394d309c98f5176a28daf9d8d52e9f04bb03c8fad8da30bc4d80e9cab0583 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-1-activate-test, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 07 09:40:38 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-1-activate-test[77148]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Dec 07 09:40:38 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-1-activate-test[77148]:                             [--no-systemd] [--no-tmpfs]
Dec 07 09:40:38 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-1-activate-test[77148]: ceph-volume activate: error: unrecognized arguments: --bad-option
Dec 07 09:40:38 compute-1 systemd[1]: libpod-f75394d309c98f5176a28daf9d8d52e9f04bb03c8fad8da30bc4d80e9cab0583.scope: Deactivated successfully.
Dec 07 09:40:38 compute-1 podman[77132]: 2025-12-07 09:40:38.946215875 +0000 UTC m=+0.356435159 container died f75394d309c98f5176a28daf9d8d52e9f04bb03c8fad8da30bc4d80e9cab0583 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-1-activate-test, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec 07 09:40:38 compute-1 systemd[1]: var-lib-containers-storage-overlay-c0cad818b64fd18076eaa12666cdf857236d87bdc26840328b0a0f7d6374b6bb-merged.mount: Deactivated successfully.
Dec 07 09:40:38 compute-1 podman[77132]: 2025-12-07 09:40:38.983867042 +0000 UTC m=+0.394086316 container remove f75394d309c98f5176a28daf9d8d52e9f04bb03c8fad8da30bc4d80e9cab0583 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-1-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 07 09:40:38 compute-1 systemd[1]: libpod-conmon-f75394d309c98f5176a28daf9d8d52e9f04bb03c8fad8da30bc4d80e9cab0583.scope: Deactivated successfully.
Dec 07 09:40:39 compute-1 systemd[1]: Reloading.
Dec 07 09:40:39 compute-1 systemd-sysv-generator[77215]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:40:39 compute-1 systemd-rc-local-generator[77211]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:40:39 compute-1 systemd[1]: Reloading.
Dec 07 09:40:39 compute-1 systemd-rc-local-generator[77250]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:40:39 compute-1 systemd-sysv-generator[77254]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:40:39 compute-1 systemd[1]: Starting Ceph osd.1 for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c...
Dec 07 09:40:39 compute-1 podman[77311]: 2025-12-07 09:40:39.925156188 +0000 UTC m=+0.036387466 container create 98c0589a1a0b5be90fd38cde9124cece5a8d47ad1cf2d59d16ccdda6c3a7e020 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-1-activate, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 07 09:40:39 compute-1 systemd[1]: Started libcrun container.
Dec 07 09:40:39 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa7a69be47e7b30ca0e96267f03550cb1a8ff77725f4d49faea932d33e75b755/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 07 09:40:39 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa7a69be47e7b30ca0e96267f03550cb1a8ff77725f4d49faea932d33e75b755/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 07 09:40:39 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa7a69be47e7b30ca0e96267f03550cb1a8ff77725f4d49faea932d33e75b755/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 07 09:40:39 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa7a69be47e7b30ca0e96267f03550cb1a8ff77725f4d49faea932d33e75b755/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 07 09:40:39 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa7a69be47e7b30ca0e96267f03550cb1a8ff77725f4d49faea932d33e75b755/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec 07 09:40:39 compute-1 podman[77311]: 2025-12-07 09:40:39.992050729 +0000 UTC m=+0.103281887 container init 98c0589a1a0b5be90fd38cde9124cece5a8d47ad1cf2d59d16ccdda6c3a7e020 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-1-activate, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 07 09:40:40 compute-1 podman[77311]: 2025-12-07 09:40:40.000724787 +0000 UTC m=+0.111955955 container start 98c0589a1a0b5be90fd38cde9124cece5a8d47ad1cf2d59d16ccdda6c3a7e020 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-1-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec 07 09:40:40 compute-1 podman[77311]: 2025-12-07 09:40:39.907575636 +0000 UTC m=+0.018806834 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 07 09:40:40 compute-1 podman[77311]: 2025-12-07 09:40:40.004274176 +0000 UTC m=+0.115505354 container attach 98c0589a1a0b5be90fd38cde9124cece5a8d47ad1cf2d59d16ccdda6c3a7e020 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-1-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 07 09:40:40 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-1-activate[77326]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 07 09:40:40 compute-1 bash[77311]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 07 09:40:40 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-1-activate[77326]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 07 09:40:40 compute-1 bash[77311]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 07 09:40:40 compute-1 lvm[77408]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 07 09:40:40 compute-1 lvm[77408]: VG ceph_vg0 finished
Dec 07 09:40:40 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-1-activate[77326]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec 07 09:40:40 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-1-activate[77326]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 07 09:40:40 compute-1 bash[77311]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec 07 09:40:40 compute-1 bash[77311]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 07 09:40:40 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-1-activate[77326]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 07 09:40:40 compute-1 bash[77311]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 07 09:40:40 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-1-activate[77326]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 07 09:40:40 compute-1 bash[77311]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 07 09:40:40 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-1-activate[77326]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec 07 09:40:40 compute-1 bash[77311]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec 07 09:40:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-1-activate[77326]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Dec 07 09:40:41 compute-1 bash[77311]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Dec 07 09:40:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-1-activate[77326]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec 07 09:40:41 compute-1 bash[77311]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec 07 09:40:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-1-activate[77326]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 07 09:40:41 compute-1 bash[77311]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 07 09:40:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-1-activate[77326]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 07 09:40:41 compute-1 bash[77311]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 07 09:40:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-1-activate[77326]: --> ceph-volume lvm activate successful for osd ID: 1
Dec 07 09:40:41 compute-1 bash[77311]: --> ceph-volume lvm activate successful for osd ID: 1
Dec 07 09:40:41 compute-1 systemd[1]: libpod-98c0589a1a0b5be90fd38cde9124cece5a8d47ad1cf2d59d16ccdda6c3a7e020.scope: Deactivated successfully.
Dec 07 09:40:41 compute-1 systemd[1]: libpod-98c0589a1a0b5be90fd38cde9124cece5a8d47ad1cf2d59d16ccdda6c3a7e020.scope: Consumed 1.329s CPU time.
Dec 07 09:40:41 compute-1 podman[77501]: 2025-12-07 09:40:41.221526309 +0000 UTC m=+0.031713369 container died 98c0589a1a0b5be90fd38cde9124cece5a8d47ad1cf2d59d16ccdda6c3a7e020 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-1-activate, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 07 09:40:41 compute-1 systemd[1]: var-lib-containers-storage-overlay-fa7a69be47e7b30ca0e96267f03550cb1a8ff77725f4d49faea932d33e75b755-merged.mount: Deactivated successfully.
Dec 07 09:40:41 compute-1 podman[77501]: 2025-12-07 09:40:41.259744999 +0000 UTC m=+0.069932039 container remove 98c0589a1a0b5be90fd38cde9124cece5a8d47ad1cf2d59d16ccdda6c3a7e020 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-1-activate, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 07 09:40:41 compute-1 podman[77562]: 2025-12-07 09:40:41.502442229 +0000 UTC m=+0.061558359 container create 1dde28673287dc78fafbecdbdc11edaff5358498f0a74e59a6802b4b31e15b63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-1, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec 07 09:40:41 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ddea3c29c26ae480c74017cdbfb3f4966313e591437cca4202fd8771c971d70/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 07 09:40:41 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ddea3c29c26ae480c74017cdbfb3f4966313e591437cca4202fd8771c971d70/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 07 09:40:41 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ddea3c29c26ae480c74017cdbfb3f4966313e591437cca4202fd8771c971d70/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 07 09:40:41 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ddea3c29c26ae480c74017cdbfb3f4966313e591437cca4202fd8771c971d70/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 07 09:40:41 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ddea3c29c26ae480c74017cdbfb3f4966313e591437cca4202fd8771c971d70/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec 07 09:40:41 compute-1 podman[77562]: 2025-12-07 09:40:41.480387614 +0000 UTC m=+0.039503764 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 07 09:40:41 compute-1 podman[77562]: 2025-12-07 09:40:41.586255425 +0000 UTC m=+0.145371615 container init 1dde28673287dc78fafbecdbdc11edaff5358498f0a74e59a6802b4b31e15b63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec 07 09:40:41 compute-1 podman[77562]: 2025-12-07 09:40:41.598525963 +0000 UTC m=+0.157642103 container start 1dde28673287dc78fafbecdbdc11edaff5358498f0a74e59a6802b4b31e15b63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 07 09:40:41 compute-1 bash[77562]: 1dde28673287dc78fafbecdbdc11edaff5358498f0a74e59a6802b4b31e15b63
Dec 07 09:40:41 compute-1 systemd[1]: Started Ceph osd.1 for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec 07 09:40:41 compute-1 sudo[77022]: pam_unix(sudo:session): session closed for user root
Dec 07 09:40:41 compute-1 ceph-osd[77581]: set uid:gid to 167:167 (ceph:ceph)
Dec 07 09:40:41 compute-1 ceph-osd[77581]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-osd, pid 2
Dec 07 09:40:41 compute-1 ceph-osd[77581]: pidfile_write: ignore empty --pid-file
Dec 07 09:40:41 compute-1 ceph-osd[77581]: bdev(0x5613f7a83800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 07 09:40:41 compute-1 ceph-osd[77581]: bdev(0x5613f7a83800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 07 09:40:41 compute-1 ceph-osd[77581]: bdev(0x5613f7a83800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 07 09:40:41 compute-1 ceph-osd[77581]: bdev(0x5613f7a83800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 07 09:40:41 compute-1 ceph-osd[77581]: bdev(0x5613f7a83800 /var/lib/ceph/osd/ceph-1/block) close
Dec 07 09:40:41 compute-1 sudo[77593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 09:40:41 compute-1 sudo[77593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:40:41 compute-1 sudo[77593]: pam_unix(sudo:session): session closed for user root
Dec 07 09:40:41 compute-1 sudo[77618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -- raw list --format json
Dec 07 09:40:41 compute-1 sudo[77618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:40:41 compute-1 ceph-osd[77581]: bdev(0x5613f7a83800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 07 09:40:41 compute-1 ceph-osd[77581]: bdev(0x5613f7a83800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 07 09:40:41 compute-1 ceph-osd[77581]: bdev(0x5613f7a83800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 07 09:40:41 compute-1 ceph-osd[77581]: bdev(0x5613f7a83800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 07 09:40:41 compute-1 ceph-osd[77581]: bdev(0x5613f7a83800 /var/lib/ceph/osd/ceph-1/block) close
Dec 07 09:40:41 compute-1 ceph-osd[77581]: bdev(0x5613f7a83800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 07 09:40:41 compute-1 ceph-osd[77581]: bdev(0x5613f7a83800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 07 09:40:41 compute-1 ceph-osd[77581]: bdev(0x5613f7a83800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 07 09:40:41 compute-1 ceph-osd[77581]: bdev(0x5613f7a83800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 07 09:40:41 compute-1 ceph-osd[77581]: bdev(0x5613f7a83800 /var/lib/ceph/osd/ceph-1/block) close
Dec 07 09:40:42 compute-1 podman[77684]: 2025-12-07 09:40:42.123964459 +0000 UTC m=+0.039960326 container create 311590fefe496067232b9fde42f1643ceee932e6fbed45bb87a44f1eb5f08e5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 07 09:40:42 compute-1 systemd[1]: Started libpod-conmon-311590fefe496067232b9fde42f1643ceee932e6fbed45bb87a44f1eb5f08e5e.scope.
Dec 07 09:40:42 compute-1 systemd[1]: Started libcrun container.
Dec 07 09:40:42 compute-1 podman[77684]: 2025-12-07 09:40:42.196391629 +0000 UTC m=+0.112387486 container init 311590fefe496067232b9fde42f1643ceee932e6fbed45bb87a44f1eb5f08e5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_lederberg, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 07 09:40:42 compute-1 podman[77684]: 2025-12-07 09:40:42.105774801 +0000 UTC m=+0.021770658 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 07 09:40:42 compute-1 podman[77684]: 2025-12-07 09:40:42.203346003 +0000 UTC m=+0.119341830 container start 311590fefe496067232b9fde42f1643ceee932e6fbed45bb87a44f1eb5f08e5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec 07 09:40:42 compute-1 podman[77684]: 2025-12-07 09:40:42.205904148 +0000 UTC m=+0.121899975 container attach 311590fefe496067232b9fde42f1643ceee932e6fbed45bb87a44f1eb5f08e5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_lederberg, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 07 09:40:42 compute-1 agitated_lederberg[77700]: 167 167
Dec 07 09:40:42 compute-1 systemd[1]: libpod-311590fefe496067232b9fde42f1643ceee932e6fbed45bb87a44f1eb5f08e5e.scope: Deactivated successfully.
Dec 07 09:40:42 compute-1 podman[77684]: 2025-12-07 09:40:42.209109558 +0000 UTC m=+0.125105385 container died 311590fefe496067232b9fde42f1643ceee932e6fbed45bb87a44f1eb5f08e5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_lederberg, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 07 09:40:42 compute-1 ceph-osd[77581]: bdev(0x5613f7a83800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 07 09:40:42 compute-1 ceph-osd[77581]: bdev(0x5613f7a83800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 07 09:40:42 compute-1 ceph-osd[77581]: bdev(0x5613f7a83800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 07 09:40:42 compute-1 ceph-osd[77581]: bdev(0x5613f7a83800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 07 09:40:42 compute-1 ceph-osd[77581]: bdev(0x5613f7a83800 /var/lib/ceph/osd/ceph-1/block) close
Dec 07 09:40:42 compute-1 systemd[1]: var-lib-containers-storage-overlay-67666921ca3c22cc5a3615d5b6287b9fc8a4d64f546ee4f9a63c2b13b445666d-merged.mount: Deactivated successfully.
Dec 07 09:40:42 compute-1 podman[77684]: 2025-12-07 09:40:42.242920608 +0000 UTC m=+0.158916435 container remove 311590fefe496067232b9fde42f1643ceee932e6fbed45bb87a44f1eb5f08e5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_lederberg, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec 07 09:40:42 compute-1 systemd[1]: libpod-conmon-311590fefe496067232b9fde42f1643ceee932e6fbed45bb87a44f1eb5f08e5e.scope: Deactivated successfully.
Dec 07 09:40:42 compute-1 podman[77726]: 2025-12-07 09:40:42.384400764 +0000 UTC m=+0.040725455 container create 999dc65aba814d41152eff550d92bcb161477ff72bb71ac632c592b07b1756ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_clarke, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 07 09:40:42 compute-1 systemd[1]: Started libpod-conmon-999dc65aba814d41152eff550d92bcb161477ff72bb71ac632c592b07b1756ff.scope.
Dec 07 09:40:42 compute-1 systemd[1]: Started libcrun container.
Dec 07 09:40:42 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68075278bd060f8cf2a349e03332fdb4b31a7b9379560757a2424163b47ccc12/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 07 09:40:42 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68075278bd060f8cf2a349e03332fdb4b31a7b9379560757a2424163b47ccc12/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 07 09:40:42 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68075278bd060f8cf2a349e03332fdb4b31a7b9379560757a2424163b47ccc12/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 07 09:40:42 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68075278bd060f8cf2a349e03332fdb4b31a7b9379560757a2424163b47ccc12/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 07 09:40:42 compute-1 podman[77726]: 2025-12-07 09:40:42.448932196 +0000 UTC m=+0.105256867 container init 999dc65aba814d41152eff550d92bcb161477ff72bb71ac632c592b07b1756ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_clarke, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec 07 09:40:42 compute-1 podman[77726]: 2025-12-07 09:40:42.456953148 +0000 UTC m=+0.113277809 container start 999dc65aba814d41152eff550d92bcb161477ff72bb71ac632c592b07b1756ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_clarke, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 07 09:40:42 compute-1 podman[77726]: 2025-12-07 09:40:42.459937142 +0000 UTC m=+0.116261893 container attach 999dc65aba814d41152eff550d92bcb161477ff72bb71ac632c592b07b1756ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_clarke, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 07 09:40:42 compute-1 podman[77726]: 2025-12-07 09:40:42.36435625 +0000 UTC m=+0.020680921 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 07 09:40:42 compute-1 ceph-osd[77581]: bdev(0x5613f7a83800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 07 09:40:42 compute-1 ceph-osd[77581]: bdev(0x5613f7a83800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 07 09:40:42 compute-1 ceph-osd[77581]: bdev(0x5613f7a83800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 07 09:40:42 compute-1 ceph-osd[77581]: bdev(0x5613f7a83800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 07 09:40:42 compute-1 ceph-osd[77581]: bdev(0x5613f7a83800 /var/lib/ceph/osd/ceph-1/block) close
Dec 07 09:40:42 compute-1 ceph-osd[77581]: bdev(0x5613f7a83800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 07 09:40:42 compute-1 ceph-osd[77581]: bdev(0x5613f7a83800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 07 09:40:42 compute-1 ceph-osd[77581]: bdev(0x5613f7a83800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 07 09:40:42 compute-1 ceph-osd[77581]: bdev(0x5613f7a83800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 07 09:40:42 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 07 09:40:42 compute-1 ceph-osd[77581]: bdev(0x5613f7a83c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 07 09:40:42 compute-1 ceph-osd[77581]: bdev(0x5613f7a83c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 07 09:40:42 compute-1 ceph-osd[77581]: bdev(0x5613f7a83c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 07 09:40:42 compute-1 ceph-osd[77581]: bdev(0x5613f7a83c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 07 09:40:42 compute-1 ceph-osd[77581]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec 07 09:40:42 compute-1 ceph-osd[77581]: bdev(0x5613f7a83c00 /var/lib/ceph/osd/ceph-1/block) close
Dec 07 09:40:42 compute-1 ceph-osd[77581]: bdev(0x5613f7a83800 /var/lib/ceph/osd/ceph-1/block) close
Dec 07 09:40:42 compute-1 lvm[77822]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 07 09:40:42 compute-1 lvm[77822]: VG ceph_vg0 finished
Dec 07 09:40:43 compute-1 ceph-osd[77581]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Dec 07 09:40:43 compute-1 ceph-osd[77581]: load: jerasure load: lrc 
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bdev(0x5613f8928c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bdev(0x5613f8928c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bdev(0x5613f8928c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bdev(0x5613f8928c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bdev(0x5613f8928c00 /var/lib/ceph/osd/ceph-1/block) close
Dec 07 09:40:43 compute-1 laughing_clarke[77743]: {}
Dec 07 09:40:43 compute-1 systemd[1]: libpod-999dc65aba814d41152eff550d92bcb161477ff72bb71ac632c592b07b1756ff.scope: Deactivated successfully.
Dec 07 09:40:43 compute-1 podman[77726]: 2025-12-07 09:40:43.102686266 +0000 UTC m=+0.759010937 container died 999dc65aba814d41152eff550d92bcb161477ff72bb71ac632c592b07b1756ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_clarke, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec 07 09:40:43 compute-1 systemd[1]: var-lib-containers-storage-overlay-68075278bd060f8cf2a349e03332fdb4b31a7b9379560757a2424163b47ccc12-merged.mount: Deactivated successfully.
Dec 07 09:40:43 compute-1 podman[77726]: 2025-12-07 09:40:43.14381555 +0000 UTC m=+0.800140221 container remove 999dc65aba814d41152eff550d92bcb161477ff72bb71ac632c592b07b1756ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True)
Dec 07 09:40:43 compute-1 systemd[1]: libpod-conmon-999dc65aba814d41152eff550d92bcb161477ff72bb71ac632c592b07b1756ff.scope: Deactivated successfully.
Dec 07 09:40:43 compute-1 sudo[77618]: pam_unix(sudo:session): session closed for user root
Dec 07 09:40:43 compute-1 sudo[77840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 07 09:40:43 compute-1 sudo[77840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:40:43 compute-1 sudo[77840]: pam_unix(sudo:session): session closed for user root
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bdev(0x5613f8928c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bdev(0x5613f8928c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bdev(0x5613f8928c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bdev(0x5613f8928c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bdev(0x5613f8928c00 /var/lib/ceph/osd/ceph-1/block) close
Dec 07 09:40:43 compute-1 sudo[77869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 09:40:43 compute-1 sudo[77869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:40:43 compute-1 sudo[77869]: pam_unix(sudo:session): session closed for user root
Dec 07 09:40:43 compute-1 sudo[77894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 07 09:40:43 compute-1 sudo[77894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:40:43 compute-1 ceph-osd[77581]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec 07 09:40:43 compute-1 ceph-osd[77581]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bdev(0x5613f8928c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bdev(0x5613f8928c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bdev(0x5613f8928c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bdev(0x5613f8928c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bdev(0x5613f8928c00 /var/lib/ceph/osd/ceph-1/block) close
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bdev(0x5613f8928c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bdev(0x5613f8928c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bdev(0x5613f8928c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bdev(0x5613f8928c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bdev(0x5613f8928c00 /var/lib/ceph/osd/ceph-1/block) close
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bdev(0x5613f8928c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bdev(0x5613f8928c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bdev(0x5613f8928c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bdev(0x5613f8928c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bdev(0x5613f8928c00 /var/lib/ceph/osd/ceph-1/block) close
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bdev(0x5613f8928c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bdev(0x5613f8928c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bdev(0x5613f8928c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bdev(0x5613f8928c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bdev(0x5613f8929000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bdev(0x5613f8929000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bdev(0x5613f8929000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bdev(0x5613f8929000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bluefs mount
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bluefs mount shared_bdev_used = 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: RocksDB version: 7.9.2
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Git sha 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Compile date 2025-07-17 03:12:14
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: DB SUMMARY
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: DB Session ID:  CFW6B7GYHVR554CRKVPM
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: CURRENT file:  CURRENT
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: IDENTITY file:  IDENTITY
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                         Options.error_if_exists: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                       Options.create_if_missing: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                         Options.paranoid_checks: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                                     Options.env: 0x5613f88f9dc0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                                Options.info_log: 0x5613f88fd7a0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.max_file_opening_threads: 16
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                              Options.statistics: (nil)
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                               Options.use_fsync: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                       Options.max_log_file_size: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                         Options.allow_fallocate: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                        Options.use_direct_reads: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.create_missing_column_families: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                              Options.db_log_dir: 
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                                 Options.wal_dir: db.wal
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                   Options.advise_random_on_open: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                    Options.write_buffer_manager: 0x5613f89f4a00
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                            Options.rate_limiter: (nil)
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                  Options.unordered_write: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                               Options.row_cache: None
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                              Options.wal_filter: None
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:             Options.allow_ingest_behind: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:             Options.two_write_queues: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:             Options.manual_wal_flush: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:             Options.wal_compression: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:             Options.atomic_flush: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                 Options.log_readahead_size: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:             Options.allow_data_in_errors: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:             Options.db_host_id: __hostname__
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:             Options.max_background_jobs: 4
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:             Options.max_background_compactions: -1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:             Options.max_subcompactions: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                          Options.max_open_files: -1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                          Options.bytes_per_sync: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                  Options.max_background_flushes: -1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Compression algorithms supported:
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         kZSTD supported: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         kXpressCompression supported: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         kBZip2Compression supported: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         kLZ4Compression supported: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         kZlibCompression supported: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         kLZ4HCCompression supported: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         kSnappyCompression supported: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:        Options.compaction_filter: None
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:        Options.compaction_filter_factory: None
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:  Options.sst_partitioner_factory: None
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613f88fdb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5613f7b19350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:        Options.write_buffer_size: 16777216
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:  Options.max_write_buffer_number: 64
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.compression: LZ4
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:       Options.prefix_extractor: nullptr
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:             Options.num_levels: 7
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                  Options.compression_opts.level: 32767
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:               Options.compression_opts.strategy: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                  Options.compression_opts.enabled: false
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                        Options.arena_block_size: 1048576
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.disable_auto_compactions: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                   Options.inplace_update_support: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                           Options.bloom_locality: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                    Options.max_successive_merges: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.paranoid_file_checks: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.force_consistency_checks: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.report_bg_io_stats: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                               Options.ttl: 2592000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                       Options.enable_blob_files: false
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                           Options.min_blob_size: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                          Options.blob_file_size: 268435456
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.blob_file_starting_level: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
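Each column family that follows ([m-0], [m-1], [m-2], [p-0], [p-1], ...) logs the same RocksDB option dump at startup, so differences between families are easier to spot by parsing the journal than by reading it line by line. A minimal Python sketch, assuming the journal has been exported to a plain-text file (the path osd.log below is only an example, not from the log), could group the dumped Options.* values by column family:

    import re
    from collections import defaultdict

    CF_RE = re.compile(r"Options for column family \[(?P<cf>[^\]]+)\]")
    OPT_RE = re.compile(r"rocksdb:\s+Options\.(?P<key>\S+):\s+(?P<val>.+)$")

    def parse_osd_rocksdb_options(path):
        """Group the Options.* lines from a ceph-osd RocksDB dump by column family."""
        options = defaultdict(dict)
        current_cf = "(pre-header)"   # lines seen before the first CF header
        with open(path) as fh:
            for line in fh:
                header = CF_RE.search(line)
                if header:
                    current_cf = header.group("cf")
                    continue
                opt = OPT_RE.search(line)
                if opt:
                    options[current_cf][opt.group("key")] = opt.group("val").strip()
        return options

    if __name__ == "__main__":
        opts = parse_osd_rocksdb_options("osd.log")   # example path
        for cf, kv in opts.items():
            print(cf, kv.get("write_buffer_size"), kv.get("compression"))

The indented table_factory continuation lines carry no "rocksdb:" prefix and are skipped by this sketch; only the prefixed Options.* lines are collected.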
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:           Options.merge_operator: None
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:        Options.compaction_filter: None
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:        Options.compaction_filter_factory: None
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:  Options.sst_partitioner_factory: None
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613f88fdb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5613f7b19350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:        Options.write_buffer_size: 16777216
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:  Options.max_write_buffer_number: 64
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.compression: LZ4
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:       Options.prefix_extractor: nullptr
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:             Options.num_levels: 7
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                  Options.compression_opts.level: 32767
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:               Options.compression_opts.strategy: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                  Options.compression_opts.enabled: false
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                        Options.arena_block_size: 1048576
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.disable_auto_compactions: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                   Options.inplace_update_support: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                           Options.bloom_locality: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                    Options.max_successive_merges: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.paranoid_file_checks: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.force_consistency_checks: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.report_bg_io_stats: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                               Options.ttl: 2592000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                       Options.enable_blob_files: false
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                           Options.min_blob_size: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                          Options.blob_file_size: 268435456
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.blob_file_starting_level: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
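The level sizing implied by the values dumped for each family follows directly from max_bytes_for_level_base = 1073741824 (1 GiB), max_bytes_for_level_multiplier = 8, all addtl factors at 1, and level_compaction_dynamic_level_bytes = 0: the target for level N (N >= 1) is roughly base * 8^(N-1). An illustrative calculation, not part of the log output:

    # Illustrative only: target level sizes implied by the dumped options.
    base = 1073741824          # Options.max_bytes_for_level_base (1 GiB)
    multiplier = 8.0           # Options.max_bytes_for_level_multiplier
    num_levels = 7             # Options.num_levels

    for level in range(1, num_levels):
        target = base * multiplier ** (level - 1)
        print(f"L{level}: {target / 2**30:.0f} GiB")
    # L1: 1 GiB, L2: 8 GiB, L3: 64 GiB, L4: 512 GiB, ...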
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:           Options.merge_operator: None
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:        Options.compaction_filter: None
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:        Options.compaction_filter_factory: None
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:  Options.sst_partitioner_factory: None
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613f88fdb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5613f7b19350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:        Options.write_buffer_size: 16777216
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:  Options.max_write_buffer_number: 64
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.compression: LZ4
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:       Options.prefix_extractor: nullptr
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:             Options.num_levels: 7
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                  Options.compression_opts.level: 32767
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:               Options.compression_opts.strategy: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                  Options.compression_opts.enabled: false
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                        Options.arena_block_size: 1048576
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.disable_auto_compactions: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                   Options.inplace_update_support: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                           Options.bloom_locality: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                    Options.max_successive_merges: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.paranoid_file_checks: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.force_consistency_checks: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.report_bg_io_stats: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                               Options.ttl: 2592000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                       Options.enable_blob_files: false
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                           Options.min_blob_size: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                          Options.blob_file_size: 268435456
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.blob_file_starting_level: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:           Options.merge_operator: None
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:        Options.compaction_filter: None
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:        Options.compaction_filter_factory: None
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:  Options.sst_partitioner_factory: None
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613f88fdb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5613f7b19350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:        Options.write_buffer_size: 16777216
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:  Options.max_write_buffer_number: 64
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.compression: LZ4
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:       Options.prefix_extractor: nullptr
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:             Options.num_levels: 7
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                  Options.compression_opts.level: 32767
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:               Options.compression_opts.strategy: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                  Options.compression_opts.enabled: false
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                        Options.arena_block_size: 1048576
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.disable_auto_compactions: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                   Options.inplace_update_support: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                           Options.bloom_locality: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                    Options.max_successive_merges: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.paranoid_file_checks: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.force_consistency_checks: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.report_bg_io_stats: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                               Options.ttl: 2592000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                       Options.enable_blob_files: false
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                           Options.min_blob_size: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                          Options.blob_file_size: 268435456
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.blob_file_starting_level: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:           Options.merge_operator: None
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:        Options.compaction_filter: None
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:        Options.compaction_filter_factory: None
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:  Options.sst_partitioner_factory: None
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613f88fdb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5613f7b19350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:        Options.write_buffer_size: 16777216
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:  Options.max_write_buffer_number: 64
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.compression: LZ4
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:       Options.prefix_extractor: nullptr
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:             Options.num_levels: 7
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                  Options.compression_opts.level: 32767
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:               Options.compression_opts.strategy: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                  Options.compression_opts.enabled: false
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                        Options.arena_block_size: 1048576
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.disable_auto_compactions: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                   Options.inplace_update_support: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                           Options.bloom_locality: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                    Options.max_successive_merges: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.paranoid_file_checks: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.force_consistency_checks: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.report_bg_io_stats: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                               Options.ttl: 2592000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                       Options.enable_blob_files: false
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                           Options.min_blob_size: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                          Options.blob_file_size: 268435456
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.blob_file_starting_level: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
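The memtable and cache sizing can likewise be read off the dump: write_buffer_size = 16777216 (16 MiB), up to max_write_buffer_number = 64 memtables per family, flushes merging min_write_buffer_number_to_merge = 6 of them, and one BinnedLRUCache of 483183820 bytes shared by every family (the same block_cache pointer 0x5613f7b19350 appears in each dump). A small illustrative calculation under those dumped values:

    # Illustrative only: memtable and block-cache sizing implied by the dump.
    write_buffer_size = 16777216            # 16 MiB per memtable
    max_write_buffer_number = 64            # memtables allowed per column family
    min_write_buffer_number_to_merge = 6    # memtables merged into one flush
    block_cache_capacity = 483183820        # shared BinnedLRUCache capacity

    per_flush = write_buffer_size * min_write_buffer_number_to_merge
    worst_case = write_buffer_size * max_write_buffer_number
    print(f"~{per_flush / 2**20:.0f} MiB buffered before each flush")      # ~96 MiB
    print(f"up to {worst_case / 2**30:.0f} GiB of memtables per family")   # ~1 GiB
    print(f"block cache ~{block_cache_capacity / 2**20:.0f} MiB shared")   # ~461 MiB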
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:           Options.merge_operator: None
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:        Options.compaction_filter: None
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:        Options.compaction_filter_factory: None
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:  Options.sst_partitioner_factory: None
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613f88fdb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5613f7b19350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:        Options.write_buffer_size: 16777216
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:  Options.max_write_buffer_number: 64
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.compression: LZ4
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:       Options.prefix_extractor: nullptr
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:             Options.num_levels: 7
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                  Options.compression_opts.level: 32767
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:               Options.compression_opts.strategy: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                  Options.compression_opts.enabled: false
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                        Options.arena_block_size: 1048576
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.disable_auto_compactions: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                   Options.inplace_update_support: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                           Options.bloom_locality: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                    Options.max_successive_merges: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.paranoid_file_checks: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.force_consistency_checks: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.report_bg_io_stats: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                               Options.ttl: 2592000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                       Options.enable_blob_files: false
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                           Options.min_blob_size: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                          Options.blob_file_size: 268435456
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.blob_file_starting_level: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:           Options.merge_operator: None
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:        Options.compaction_filter: None
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:        Options.compaction_filter_factory: None
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:  Options.sst_partitioner_factory: None
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613f88fdb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5613f7b19350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:        Options.write_buffer_size: 16777216
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:  Options.max_write_buffer_number: 64
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.compression: LZ4
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:       Options.prefix_extractor: nullptr
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:             Options.num_levels: 7
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                  Options.compression_opts.level: 32767
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:               Options.compression_opts.strategy: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                  Options.compression_opts.enabled: false
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                        Options.arena_block_size: 1048576
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.disable_auto_compactions: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                   Options.inplace_update_support: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                           Options.bloom_locality: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                    Options.max_successive_merges: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.paranoid_file_checks: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.force_consistency_checks: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.report_bg_io_stats: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                               Options.ttl: 2592000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                       Options.enable_blob_files: false
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                           Options.min_blob_size: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                          Options.blob_file_size: 268435456
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.blob_file_starting_level: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:           Options.merge_operator: None
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:        Options.compaction_filter: None
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:        Options.compaction_filter_factory: None
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:  Options.sst_partitioner_factory: None
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613f88fdb80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5613f7b189b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:        Options.write_buffer_size: 16777216
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:  Options.max_write_buffer_number: 64
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.compression: LZ4
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:       Options.prefix_extractor: nullptr
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:             Options.num_levels: 7
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                  Options.compression_opts.level: 32767
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:               Options.compression_opts.strategy: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                  Options.compression_opts.enabled: false
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                        Options.arena_block_size: 1048576
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.disable_auto_compactions: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                   Options.inplace_update_support: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                           Options.bloom_locality: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                    Options.max_successive_merges: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.paranoid_file_checks: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.force_consistency_checks: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.report_bg_io_stats: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                               Options.ttl: 2592000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                       Options.enable_blob_files: false
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                           Options.min_blob_size: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                          Options.blob_file_size: 268435456
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.blob_file_starting_level: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:           Options.merge_operator: None
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:        Options.compaction_filter: None
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:        Options.compaction_filter_factory: None
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:  Options.sst_partitioner_factory: None
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613f88fdb80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5613f7b189b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:        Options.write_buffer_size: 16777216
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:  Options.max_write_buffer_number: 64
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.compression: LZ4
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:       Options.prefix_extractor: nullptr
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:             Options.num_levels: 7
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                  Options.compression_opts.level: 32767
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:               Options.compression_opts.strategy: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                  Options.compression_opts.enabled: false
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                        Options.arena_block_size: 1048576
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.disable_auto_compactions: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                   Options.inplace_update_support: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                           Options.bloom_locality: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                    Options.max_successive_merges: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.paranoid_file_checks: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.force_consistency_checks: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.report_bg_io_stats: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                               Options.ttl: 2592000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                       Options.enable_blob_files: false
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                           Options.min_blob_size: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                          Options.blob_file_size: 268435456
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.blob_file_starting_level: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:           Options.merge_operator: None
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:        Options.compaction_filter: None
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:        Options.compaction_filter_factory: None
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:  Options.sst_partitioner_factory: None
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613f88fdb80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5613f7b189b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:        Options.write_buffer_size: 16777216
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:  Options.max_write_buffer_number: 64
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.compression: LZ4
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:       Options.prefix_extractor: nullptr
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:             Options.num_levels: 7
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                  Options.compression_opts.level: 32767
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:               Options.compression_opts.strategy: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                  Options.compression_opts.enabled: false
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                        Options.arena_block_size: 1048576
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.disable_auto_compactions: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                   Options.inplace_update_support: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                           Options.bloom_locality: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                    Options.max_successive_merges: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.paranoid_file_checks: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.force_consistency_checks: 1
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.report_bg_io_stats: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                               Options.ttl: 2592000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                       Options.enable_blob_files: false
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                           Options.min_blob_size: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                          Options.blob_file_size: 268435456
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb:                Options.blob_file_starting_level: 0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 91beb6f3-0d83-4455-a2a7-5e0498728013
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100443749432, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100443749607, "job": 1, "event": "recovery_finished"}
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: freelist init
Dec 07 09:40:43 compute-1 ceph-osd[77581]: freelist _read_cfg
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 07 09:40:43 compute-1 ceph-osd[77581]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bluefs umount
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bdev(0x5613f8929000 /var/lib/ceph/osd/ceph-1/block) close
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bdev(0x5613f8929000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bdev(0x5613f8929000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bdev(0x5613f8929000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bdev(0x5613f8929000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bluefs mount
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bluefs mount shared_bdev_used = 4718592
Dec 07 09:40:43 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: RocksDB version: 7.9.2
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Git sha 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Compile date 2025-07-17 03:12:14
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: DB SUMMARY
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: DB Session ID:  CFW6B7GYHVR554CRKVPN
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: CURRENT file:  CURRENT
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: IDENTITY file:  IDENTITY
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                         Options.error_if_exists: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                       Options.create_if_missing: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                         Options.paranoid_checks: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                                     Options.env: 0x5613f8a982a0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                                Options.info_log: 0x5613f88fd940
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.max_file_opening_threads: 16
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                              Options.statistics: (nil)
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                               Options.use_fsync: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                       Options.max_log_file_size: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                         Options.allow_fallocate: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                        Options.use_direct_reads: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.create_missing_column_families: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                              Options.db_log_dir: 
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                                 Options.wal_dir: db.wal
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                   Options.advise_random_on_open: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                    Options.write_buffer_manager: 0x5613f89f4a00
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                            Options.rate_limiter: (nil)
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                  Options.unordered_write: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                               Options.row_cache: None
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                              Options.wal_filter: None
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:             Options.allow_ingest_behind: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:             Options.two_write_queues: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:             Options.manual_wal_flush: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:             Options.wal_compression: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:             Options.atomic_flush: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                 Options.log_readahead_size: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:             Options.allow_data_in_errors: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:             Options.db_host_id: __hostname__
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:             Options.max_background_jobs: 4
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:             Options.max_background_compactions: -1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:             Options.max_subcompactions: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                          Options.max_open_files: -1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                          Options.bytes_per_sync: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                  Options.max_background_flushes: -1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Compression algorithms supported:
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         kZSTD supported: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         kXpressCompression supported: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         kBZip2Compression supported: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         kLZ4Compression supported: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         kZlibCompression supported: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         kLZ4HCCompression supported: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         kSnappyCompression supported: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:        Options.compaction_filter: None
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:        Options.compaction_filter_factory: None
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:  Options.sst_partitioner_factory: None
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613f88fd680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5613f7b19350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:        Options.write_buffer_size: 16777216
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:  Options.max_write_buffer_number: 64
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.compression: LZ4
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:       Options.prefix_extractor: nullptr
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:             Options.num_levels: 7
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                  Options.compression_opts.level: 32767
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:               Options.compression_opts.strategy: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                  Options.compression_opts.enabled: false
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                        Options.arena_block_size: 1048576
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.disable_auto_compactions: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                   Options.inplace_update_support: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                           Options.bloom_locality: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                    Options.max_successive_merges: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.paranoid_file_checks: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.force_consistency_checks: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.report_bg_io_stats: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                               Options.ttl: 2592000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                       Options.enable_blob_files: false
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                           Options.min_blob_size: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                          Options.blob_file_size: 268435456
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.blob_file_starting_level: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:           Options.merge_operator: None
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:        Options.compaction_filter: None
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:        Options.compaction_filter_factory: None
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:  Options.sst_partitioner_factory: None
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613f88fd680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5613f7b19350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:        Options.write_buffer_size: 16777216
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:  Options.max_write_buffer_number: 64
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.compression: LZ4
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:       Options.prefix_extractor: nullptr
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:             Options.num_levels: 7
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                  Options.compression_opts.level: 32767
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:               Options.compression_opts.strategy: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                  Options.compression_opts.enabled: false
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                        Options.arena_block_size: 1048576
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.disable_auto_compactions: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                   Options.inplace_update_support: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                           Options.bloom_locality: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                    Options.max_successive_merges: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.paranoid_file_checks: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.force_consistency_checks: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.report_bg_io_stats: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                               Options.ttl: 2592000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                       Options.enable_blob_files: false
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                           Options.min_blob_size: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                          Options.blob_file_size: 268435456
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.blob_file_starting_level: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:           Options.merge_operator: None
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:        Options.compaction_filter: None
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:        Options.compaction_filter_factory: None
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:  Options.sst_partitioner_factory: None
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613f88fd680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5613f7b19350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:        Options.write_buffer_size: 16777216
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:  Options.max_write_buffer_number: 64
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.compression: LZ4
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:       Options.prefix_extractor: nullptr
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:             Options.num_levels: 7
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                  Options.compression_opts.level: 32767
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:               Options.compression_opts.strategy: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                  Options.compression_opts.enabled: false
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                        Options.arena_block_size: 1048576
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.disable_auto_compactions: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                   Options.inplace_update_support: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                           Options.bloom_locality: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                    Options.max_successive_merges: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.paranoid_file_checks: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.force_consistency_checks: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.report_bg_io_stats: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                               Options.ttl: 2592000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                       Options.enable_blob_files: false
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                           Options.min_blob_size: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                          Options.blob_file_size: 268435456
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.blob_file_starting_level: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:           Options.merge_operator: None
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:        Options.compaction_filter: None
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:        Options.compaction_filter_factory: None
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:  Options.sst_partitioner_factory: None
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613f88fd680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5613f7b19350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:        Options.write_buffer_size: 16777216
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:  Options.max_write_buffer_number: 64
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.compression: LZ4
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:       Options.prefix_extractor: nullptr
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:             Options.num_levels: 7
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                  Options.compression_opts.level: 32767
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:               Options.compression_opts.strategy: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                  Options.compression_opts.enabled: false
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                        Options.arena_block_size: 1048576
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.disable_auto_compactions: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                   Options.inplace_update_support: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                           Options.bloom_locality: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                    Options.max_successive_merges: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.paranoid_file_checks: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.force_consistency_checks: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.report_bg_io_stats: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                               Options.ttl: 2592000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                       Options.enable_blob_files: false
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                           Options.min_blob_size: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                          Options.blob_file_size: 268435456
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.blob_file_starting_level: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:           Options.merge_operator: None
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:        Options.compaction_filter: None
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:        Options.compaction_filter_factory: None
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:  Options.sst_partitioner_factory: None
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613f88fd680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5613f7b19350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:        Options.write_buffer_size: 16777216
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:  Options.max_write_buffer_number: 64
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.compression: LZ4
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:       Options.prefix_extractor: nullptr
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:             Options.num_levels: 7
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                  Options.compression_opts.level: 32767
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:               Options.compression_opts.strategy: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                  Options.compression_opts.enabled: false
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                        Options.arena_block_size: 1048576
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.disable_auto_compactions: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                   Options.inplace_update_support: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                           Options.bloom_locality: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                    Options.max_successive_merges: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.paranoid_file_checks: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.force_consistency_checks: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.report_bg_io_stats: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                               Options.ttl: 2592000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                       Options.enable_blob_files: false
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                           Options.min_blob_size: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                          Options.blob_file_size: 268435456
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.blob_file_starting_level: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:           Options.merge_operator: None
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:        Options.compaction_filter: None
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:        Options.compaction_filter_factory: None
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:  Options.sst_partitioner_factory: None
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613f88fd680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5613f7b19350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:        Options.write_buffer_size: 16777216
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:  Options.max_write_buffer_number: 64
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.compression: LZ4
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:       Options.prefix_extractor: nullptr
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:             Options.num_levels: 7
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                  Options.compression_opts.level: 32767
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:               Options.compression_opts.strategy: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                  Options.compression_opts.enabled: false
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                        Options.arena_block_size: 1048576
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.disable_auto_compactions: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                   Options.inplace_update_support: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                           Options.bloom_locality: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                    Options.max_successive_merges: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.paranoid_file_checks: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.force_consistency_checks: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.report_bg_io_stats: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                               Options.ttl: 2592000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                       Options.enable_blob_files: false
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                           Options.min_blob_size: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                          Options.blob_file_size: 268435456
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.blob_file_starting_level: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:           Options.merge_operator: None
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:        Options.compaction_filter: None
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:        Options.compaction_filter_factory: None
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:  Options.sst_partitioner_factory: None
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613f88fd680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5613f7b19350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:        Options.write_buffer_size: 16777216
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:  Options.max_write_buffer_number: 64
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.compression: LZ4
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:       Options.prefix_extractor: nullptr
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:             Options.num_levels: 7
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                  Options.compression_opts.level: 32767
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:               Options.compression_opts.strategy: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                  Options.compression_opts.enabled: false
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                        Options.arena_block_size: 1048576
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.disable_auto_compactions: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                   Options.inplace_update_support: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                           Options.bloom_locality: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                    Options.max_successive_merges: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.paranoid_file_checks: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.force_consistency_checks: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.report_bg_io_stats: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                               Options.ttl: 2592000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                       Options.enable_blob_files: false
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                           Options.min_blob_size: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                          Options.blob_file_size: 268435456
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.blob_file_starting_level: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:           Options.merge_operator: None
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:        Options.compaction_filter: None
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:        Options.compaction_filter_factory: None
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:  Options.sst_partitioner_factory: None
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613f88fdac0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5613f7b189b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:        Options.write_buffer_size: 16777216
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:  Options.max_write_buffer_number: 64
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.compression: LZ4
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:       Options.prefix_extractor: nullptr
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:             Options.num_levels: 7
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                  Options.compression_opts.level: 32767
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:               Options.compression_opts.strategy: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                  Options.compression_opts.enabled: false
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                        Options.arena_block_size: 1048576
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.disable_auto_compactions: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                   Options.inplace_update_support: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                           Options.bloom_locality: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                    Options.max_successive_merges: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.paranoid_file_checks: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.force_consistency_checks: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.report_bg_io_stats: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                               Options.ttl: 2592000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                       Options.enable_blob_files: false
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                           Options.min_blob_size: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                          Options.blob_file_size: 268435456
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.blob_file_starting_level: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:           Options.merge_operator: None
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:        Options.compaction_filter: None
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:        Options.compaction_filter_factory: None
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:  Options.sst_partitioner_factory: None
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613f88fdac0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5613f7b189b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:        Options.write_buffer_size: 16777216
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:  Options.max_write_buffer_number: 64
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.compression: LZ4
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:       Options.prefix_extractor: nullptr
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:             Options.num_levels: 7
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                  Options.compression_opts.level: 32767
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:               Options.compression_opts.strategy: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                  Options.compression_opts.enabled: false
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                        Options.arena_block_size: 1048576
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.disable_auto_compactions: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                   Options.inplace_update_support: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                           Options.bloom_locality: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                    Options.max_successive_merges: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.paranoid_file_checks: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.force_consistency_checks: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.report_bg_io_stats: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                               Options.ttl: 2592000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                       Options.enable_blob_files: false
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                           Options.min_blob_size: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                          Options.blob_file_size: 268435456
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.blob_file_starting_level: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:           Options.merge_operator: None
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:        Options.compaction_filter: None
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:        Options.compaction_filter_factory: None
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:  Options.sst_partitioner_factory: None
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613f88fdac0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5613f7b189b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:        Options.write_buffer_size: 16777216
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:  Options.max_write_buffer_number: 64
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.compression: LZ4
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:       Options.prefix_extractor: nullptr
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:             Options.num_levels: 7
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                  Options.compression_opts.level: 32767
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:               Options.compression_opts.strategy: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                  Options.compression_opts.enabled: false
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                        Options.arena_block_size: 1048576
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.disable_auto_compactions: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                   Options.inplace_update_support: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                           Options.bloom_locality: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                    Options.max_successive_merges: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.paranoid_file_checks: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.force_consistency_checks: 1
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.report_bg_io_stats: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                               Options.ttl: 2592000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                       Options.enable_blob_files: false
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                           Options.min_blob_size: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                          Options.blob_file_size: 268435456
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb:                Options.blob_file_starting_level: 0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 91beb6f3-0d83-4455-a2a7-5e0498728013
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100444015825, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100444019342, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100444, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91beb6f3-0d83-4455-a2a7-5e0498728013", "db_session_id": "CFW6B7GYHVR554CRKVPN", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100444022205, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100444, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91beb6f3-0d83-4455-a2a7-5e0498728013", "db_session_id": "CFW6B7GYHVR554CRKVPN", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100444024786, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100444, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91beb6f3-0d83-4455-a2a7-5e0498728013", "db_session_id": "CFW6B7GYHVR554CRKVPN", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100444026227, "job": 1, "event": "recovery_finished"}
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5613f8afa000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: DB pointer 0x5613f8aa4000
Dec 07 09:40:44 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 07 09:40:44 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Dec 07 09:40:44 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 07 09:40:44 compute-1 ceph-osd[77581]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b19350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b19350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b19350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b19350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b19350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b19350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b19350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b189b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b189b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b189b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b19350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b19350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 07 09:40:44 compute-1 ceph-osd[77581]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec 07 09:40:44 compute-1 ceph-osd[77581]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec 07 09:40:44 compute-1 ceph-osd[77581]: _get_class not permitted to load lua
Dec 07 09:40:44 compute-1 ceph-osd[77581]: _get_class not permitted to load sdk
Dec 07 09:40:44 compute-1 ceph-osd[77581]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec 07 09:40:44 compute-1 ceph-osd[77581]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec 07 09:40:44 compute-1 ceph-osd[77581]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec 07 09:40:44 compute-1 ceph-osd[77581]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec 07 09:40:44 compute-1 ceph-osd[77581]: osd.1 0 load_pgs
Dec 07 09:40:44 compute-1 ceph-osd[77581]: osd.1 0 load_pgs opened 0 pgs
Dec 07 09:40:44 compute-1 ceph-osd[77581]: osd.1 0 log_to_monitors true
Dec 07 09:40:44 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-1[77577]: 2025-12-07T09:40:44.054+0000 7f8ff80fe740 -1 osd.1 0 log_to_monitors true
Dec 07 09:40:44 compute-1 podman[78189]: 2025-12-07 09:40:44.090668796 +0000 UTC m=+0.082503395 container exec 0adb3b962f9be9f77484cea48a61ddb6dbdac27d2f6f2d11917c2759647cf1b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-crash-compute-1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Dec 07 09:40:44 compute-1 podman[78189]: 2025-12-07 09:40:44.212942159 +0000 UTC m=+0.204776718 container exec_died 0adb3b962f9be9f77484cea48a61ddb6dbdac27d2f6f2d11917c2759647cf1b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-crash-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 07 09:40:44 compute-1 sudo[77894]: pam_unix(sudo:session): session closed for user root
Dec 07 09:40:44 compute-1 sudo[78456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 09:40:44 compute-1 sudo[78456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:40:44 compute-1 sudo[78456]: pam_unix(sudo:session): session closed for user root
Dec 07 09:40:44 compute-1 sudo[78481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -- inventory --format=json-pretty --filter-for-batch
Dec 07 09:40:44 compute-1 sudo[78481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:40:44 compute-1 podman[78547]: 2025-12-07 09:40:44.931954779 +0000 UTC m=+0.074246886 container create 3a1690215719b15f1641fd4ba0ea4bda973d8bcd3b78b731a71eed87e4b17a42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 07 09:40:44 compute-1 systemd[1]: Started libpod-conmon-3a1690215719b15f1641fd4ba0ea4bda973d8bcd3b78b731a71eed87e4b17a42.scope.
Dec 07 09:40:44 compute-1 podman[78547]: 2025-12-07 09:40:44.903173666 +0000 UTC m=+0.045465873 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 07 09:40:45 compute-1 systemd[1]: Started libcrun container.
Dec 07 09:40:45 compute-1 podman[78547]: 2025-12-07 09:40:45.035302827 +0000 UTC m=+0.177594984 container init 3a1690215719b15f1641fd4ba0ea4bda973d8bcd3b78b731a71eed87e4b17a42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 07 09:40:45 compute-1 podman[78547]: 2025-12-07 09:40:45.043414661 +0000 UTC m=+0.185706728 container start 3a1690215719b15f1641fd4ba0ea4bda973d8bcd3b78b731a71eed87e4b17a42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_einstein, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Dec 07 09:40:45 compute-1 podman[78547]: 2025-12-07 09:40:45.047424821 +0000 UTC m=+0.189716908 container attach 3a1690215719b15f1641fd4ba0ea4bda973d8bcd3b78b731a71eed87e4b17a42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_einstein, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 07 09:40:45 compute-1 crazy_einstein[78563]: 167 167
Dec 07 09:40:45 compute-1 systemd[1]: libpod-3a1690215719b15f1641fd4ba0ea4bda973d8bcd3b78b731a71eed87e4b17a42.scope: Deactivated successfully.
Dec 07 09:40:45 compute-1 podman[78547]: 2025-12-07 09:40:45.048725974 +0000 UTC m=+0.191018081 container died 3a1690215719b15f1641fd4ba0ea4bda973d8bcd3b78b731a71eed87e4b17a42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid)
Dec 07 09:40:45 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec 07 09:40:45 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec 07 09:40:45 compute-1 systemd[1]: var-lib-containers-storage-overlay-ea64a688281b569b6439b80b195991d23a254ec8d935af9d8e287e1446124814-merged.mount: Deactivated successfully.
Dec 07 09:40:45 compute-1 podman[78547]: 2025-12-07 09:40:45.106873416 +0000 UTC m=+0.249165483 container remove 3a1690215719b15f1641fd4ba0ea4bda973d8bcd3b78b731a71eed87e4b17a42 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 07 09:40:45 compute-1 systemd[1]: libpod-conmon-3a1690215719b15f1641fd4ba0ea4bda973d8bcd3b78b731a71eed87e4b17a42.scope: Deactivated successfully.
Dec 07 09:40:45 compute-1 podman[78587]: 2025-12-07 09:40:45.324892904 +0000 UTC m=+0.068469351 container create 03e9f18168d19a2763575683fb89be8ed774cd1c33f5886a9521c560e5040f2d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_hellman, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True)
Dec 07 09:40:45 compute-1 systemd[1]: Started libpod-conmon-03e9f18168d19a2763575683fb89be8ed774cd1c33f5886a9521c560e5040f2d.scope.
Dec 07 09:40:45 compute-1 podman[78587]: 2025-12-07 09:40:45.29369839 +0000 UTC m=+0.037274897 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 07 09:40:45 compute-1 systemd[1]: Started libcrun container.
Dec 07 09:40:45 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b3d89b091805bb1031a15d2074cff6510403fb9c8679dd4262bde3db7b74fc8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 07 09:40:45 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b3d89b091805bb1031a15d2074cff6510403fb9c8679dd4262bde3db7b74fc8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 07 09:40:45 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b3d89b091805bb1031a15d2074cff6510403fb9c8679dd4262bde3db7b74fc8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 07 09:40:45 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b3d89b091805bb1031a15d2074cff6510403fb9c8679dd4262bde3db7b74fc8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 07 09:40:45 compute-1 podman[78587]: 2025-12-07 09:40:45.436754246 +0000 UTC m=+0.180330673 container init 03e9f18168d19a2763575683fb89be8ed774cd1c33f5886a9521c560e5040f2d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_hellman, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec 07 09:40:45 compute-1 podman[78587]: 2025-12-07 09:40:45.451677941 +0000 UTC m=+0.195254358 container start 03e9f18168d19a2763575683fb89be8ed774cd1c33f5886a9521c560e5040f2d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_hellman, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 07 09:40:45 compute-1 podman[78587]: 2025-12-07 09:40:45.458183224 +0000 UTC m=+0.201759641 container attach 03e9f18168d19a2763575683fb89be8ed774cd1c33f5886a9521c560e5040f2d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec 07 09:40:46 compute-1 serene_hellman[78603]: [
Dec 07 09:40:46 compute-1 serene_hellman[78603]:     {
Dec 07 09:40:46 compute-1 serene_hellman[78603]:         "available": false,
Dec 07 09:40:46 compute-1 serene_hellman[78603]:         "being_replaced": false,
Dec 07 09:40:46 compute-1 serene_hellman[78603]:         "ceph_device_lvm": false,
Dec 07 09:40:46 compute-1 serene_hellman[78603]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 07 09:40:46 compute-1 serene_hellman[78603]:         "lsm_data": {},
Dec 07 09:40:46 compute-1 serene_hellman[78603]:         "lvs": [],
Dec 07 09:40:46 compute-1 serene_hellman[78603]:         "path": "/dev/sr0",
Dec 07 09:40:46 compute-1 serene_hellman[78603]:         "rejected_reasons": [
Dec 07 09:40:46 compute-1 serene_hellman[78603]:             "Has a FileSystem",
Dec 07 09:40:46 compute-1 serene_hellman[78603]:             "Insufficient space (<5GB)"
Dec 07 09:40:46 compute-1 serene_hellman[78603]:         ],
Dec 07 09:40:46 compute-1 serene_hellman[78603]:         "sys_api": {
Dec 07 09:40:46 compute-1 serene_hellman[78603]:             "actuators": null,
Dec 07 09:40:46 compute-1 serene_hellman[78603]:             "device_nodes": [
Dec 07 09:40:46 compute-1 serene_hellman[78603]:                 "sr0"
Dec 07 09:40:46 compute-1 serene_hellman[78603]:             ],
Dec 07 09:40:46 compute-1 serene_hellman[78603]:             "devname": "sr0",
Dec 07 09:40:46 compute-1 serene_hellman[78603]:             "human_readable_size": "482.00 KB",
Dec 07 09:40:46 compute-1 serene_hellman[78603]:             "id_bus": "ata",
Dec 07 09:40:46 compute-1 serene_hellman[78603]:             "model": "QEMU DVD-ROM",
Dec 07 09:40:46 compute-1 serene_hellman[78603]:             "nr_requests": "2",
Dec 07 09:40:46 compute-1 serene_hellman[78603]:             "parent": "/dev/sr0",
Dec 07 09:40:46 compute-1 serene_hellman[78603]:             "partitions": {},
Dec 07 09:40:46 compute-1 serene_hellman[78603]:             "path": "/dev/sr0",
Dec 07 09:40:46 compute-1 serene_hellman[78603]:             "removable": "1",
Dec 07 09:40:46 compute-1 serene_hellman[78603]:             "rev": "2.5+",
Dec 07 09:40:46 compute-1 serene_hellman[78603]:             "ro": "0",
Dec 07 09:40:46 compute-1 serene_hellman[78603]:             "rotational": "1",
Dec 07 09:40:46 compute-1 serene_hellman[78603]:             "sas_address": "",
Dec 07 09:40:46 compute-1 serene_hellman[78603]:             "sas_device_handle": "",
Dec 07 09:40:46 compute-1 serene_hellman[78603]:             "scheduler_mode": "mq-deadline",
Dec 07 09:40:46 compute-1 serene_hellman[78603]:             "sectors": 0,
Dec 07 09:40:46 compute-1 serene_hellman[78603]:             "sectorsize": "2048",
Dec 07 09:40:46 compute-1 serene_hellman[78603]:             "size": 493568.0,
Dec 07 09:40:46 compute-1 serene_hellman[78603]:             "support_discard": "2048",
Dec 07 09:40:46 compute-1 serene_hellman[78603]:             "type": "disk",
Dec 07 09:40:46 compute-1 serene_hellman[78603]:             "vendor": "QEMU"
Dec 07 09:40:46 compute-1 serene_hellman[78603]:         }
Dec 07 09:40:46 compute-1 serene_hellman[78603]:     }
Dec 07 09:40:46 compute-1 serene_hellman[78603]: ]
Dec 07 09:40:46 compute-1 systemd[1]: libpod-03e9f18168d19a2763575683fb89be8ed774cd1c33f5886a9521c560e5040f2d.scope: Deactivated successfully.
Dec 07 09:40:46 compute-1 ceph-osd[77581]: osd.1 0 done with init, starting boot process
Dec 07 09:40:46 compute-1 ceph-osd[77581]: osd.1 0 start_boot
Dec 07 09:40:46 compute-1 ceph-osd[77581]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec 07 09:40:46 compute-1 ceph-osd[77581]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec 07 09:40:46 compute-1 ceph-osd[77581]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec 07 09:40:46 compute-1 ceph-osd[77581]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec 07 09:40:46 compute-1 ceph-osd[77581]: osd.1 0  bench count 12288000 bsize 4 KiB
Dec 07 09:40:46 compute-1 podman[79741]: 2025-12-07 09:40:46.125088985 +0000 UTC m=+0.023536212 container died 03e9f18168d19a2763575683fb89be8ed774cd1c33f5886a9521c560e5040f2d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_hellman, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True)
Dec 07 09:40:46 compute-1 systemd[1]: var-lib-containers-storage-overlay-9b3d89b091805bb1031a15d2074cff6510403fb9c8679dd4262bde3db7b74fc8-merged.mount: Deactivated successfully.
Dec 07 09:40:46 compute-1 podman[79741]: 2025-12-07 09:40:46.454015142 +0000 UTC m=+0.352462319 container remove 03e9f18168d19a2763575683fb89be8ed774cd1c33f5886a9521c560e5040f2d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_hellman, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 07 09:40:46 compute-1 systemd[1]: libpod-conmon-03e9f18168d19a2763575683fb89be8ed774cd1c33f5886a9521c560e5040f2d.scope: Deactivated successfully.
Dec 07 09:40:46 compute-1 sudo[78481]: pam_unix(sudo:session): session closed for user root
Dec 07 09:40:59 compute-1 ceph-osd[77581]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 9.765 iops: 2499.859 elapsed_sec: 1.200
Dec 07 09:40:59 compute-1 ceph-osd[77581]: log_channel(cluster) log [WRN] : OSD bench result of 2499.858795 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
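The warning above spells out the standard remediation for an untrusted mClock capacity estimate: measure the device with an external benchmark (it names fio) and pin the result. A minimal sketch of that, assuming a 4 KiB random-write profile; the device path is a placeholder and the pinned value of 2500 is only illustrative, the real value should come from the fio report:

    # Measure small-block random-write IOPS with fio, the tool the warning suggests.
    # /dev/vdb is a placeholder for the OSD's backing device; run this only against
    # a device or file that does not yet hold data you care about.
    fio --name=osd-capacity --filename=/dev/vdb --direct=1 --rw=randwrite \
        --bs=4k --iodepth=32 --runtime=60 --time_based --group_reporting

    # Pin the measured capacity so mClock stops relying on the automatic osd bench
    # value (use osd_mclock_max_capacity_iops_hdd for rotational media instead).
    ceph config set osd.1 osd_mclock_max_capacity_iops_ssd 2500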
Dec 07 09:40:59 compute-1 ceph-osd[77581]: osd.1 0 waiting for initial osdmap
Dec 07 09:40:59 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-1[77577]: 2025-12-07T09:40:59.302+0000 7f8ff4081640 -1 osd.1 0 waiting for initial osdmap
Dec 07 09:40:59 compute-1 ceph-osd[77581]: osd.1 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec 07 09:40:59 compute-1 ceph-osd[77581]: osd.1 11 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Dec 07 09:40:59 compute-1 ceph-osd[77581]: osd.1 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec 07 09:40:59 compute-1 ceph-osd[77581]: osd.1 11 check_osdmap_features require_osd_release unknown -> squid
Dec 07 09:40:59 compute-1 ceph-osd[77581]: osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 07 09:40:59 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-osd-1[77577]: 2025-12-07T09:40:59.404+0000 7f8fef6a9640 -1 osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 07 09:40:59 compute-1 ceph-osd[77581]: osd.1 11 set_numa_affinity not setting numa affinity
Dec 07 09:40:59 compute-1 ceph-osd[77581]: osd.1 11 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Dec 07 09:41:00 compute-1 ceph-osd[77581]: osd.1 12 state: booting -> active
Dec 07 09:41:00 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 12 pg[1.0( empty local-lis/les=0/0 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=0 lpr=12 pi=[10,12)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:41:01 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 13 pg[1.0( empty local-lis/les=12/13 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=0 lpr=12 pi=[10,12)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:41:06 compute-1 sshd-session[79760]: banner exchange: Connection from 3.137.73.221 port 51640: invalid format
Dec 07 09:41:09 compute-1 sshd-session[79761]: banner exchange: Connection from 3.137.73.221 port 51654: invalid format
Dec 07 09:41:12 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 15 pg[2.0( empty local-lis/les=0/0 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [1] r=0 lpr=15 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:41:12 compute-1 sudo[79762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 09:41:12 compute-1 sudo[79762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:41:12 compute-1 sudo[79762]: pam_unix(sudo:session): session closed for user root
Dec 07 09:41:12 compute-1 sudo[79787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c
Dec 07 09:41:12 compute-1 sudo[79787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:41:12 compute-1 podman[79850]: 2025-12-07 09:41:12.560912546 +0000 UTC m=+0.057884066 container create a22b10ea6a04816d602016cc3e7ee1feb2a4c559fbd9121a31638ebd1ef31309 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 07 09:41:12 compute-1 systemd[1]: Started libpod-conmon-a22b10ea6a04816d602016cc3e7ee1feb2a4c559fbd9121a31638ebd1ef31309.scope.
Dec 07 09:41:12 compute-1 systemd[1]: Started libcrun container.
Dec 07 09:41:12 compute-1 podman[79850]: 2025-12-07 09:41:12.531287842 +0000 UTC m=+0.028259382 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 07 09:41:12 compute-1 podman[79850]: 2025-12-07 09:41:12.636878046 +0000 UTC m=+0.133849596 container init a22b10ea6a04816d602016cc3e7ee1feb2a4c559fbd9121a31638ebd1ef31309 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_mirzakhani, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec 07 09:41:12 compute-1 podman[79850]: 2025-12-07 09:41:12.645230375 +0000 UTC m=+0.142201895 container start a22b10ea6a04816d602016cc3e7ee1feb2a4c559fbd9121a31638ebd1ef31309 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec 07 09:41:12 compute-1 podman[79850]: 2025-12-07 09:41:12.648445556 +0000 UTC m=+0.145417076 container attach a22b10ea6a04816d602016cc3e7ee1feb2a4c559fbd9121a31638ebd1ef31309 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_mirzakhani, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 07 09:41:12 compute-1 blissful_mirzakhani[79866]: 167 167
Dec 07 09:41:12 compute-1 systemd[1]: libpod-a22b10ea6a04816d602016cc3e7ee1feb2a4c559fbd9121a31638ebd1ef31309.scope: Deactivated successfully.
Dec 07 09:41:12 compute-1 podman[79850]: 2025-12-07 09:41:12.650465507 +0000 UTC m=+0.147437027 container died a22b10ea6a04816d602016cc3e7ee1feb2a4c559fbd9121a31638ebd1ef31309 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_mirzakhani, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 07 09:41:12 compute-1 systemd[1]: var-lib-containers-storage-overlay-d41732c2a6f10a3b10c5e84f72be409997d0ba9197806eacbf3f06496231a05f-merged.mount: Deactivated successfully.
Dec 07 09:41:12 compute-1 podman[79850]: 2025-12-07 09:41:12.687499068 +0000 UTC m=+0.184470588 container remove a22b10ea6a04816d602016cc3e7ee1feb2a4c559fbd9121a31638ebd1ef31309 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_mirzakhani, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True)
Dec 07 09:41:12 compute-1 systemd[1]: libpod-conmon-a22b10ea6a04816d602016cc3e7ee1feb2a4c559fbd9121a31638ebd1ef31309.scope: Deactivated successfully.
Dec 07 09:41:12 compute-1 podman[79882]: 2025-12-07 09:41:12.749565127 +0000 UTC m=+0.039699148 container create e43aa654295e3bbaf0163febd9111443a53c459e5a7d30837f50fd32f6c1d622 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_haibt, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 07 09:41:12 compute-1 systemd[1]: Started libpod-conmon-e43aa654295e3bbaf0163febd9111443a53c459e5a7d30837f50fd32f6c1d622.scope.
Dec 07 09:41:12 compute-1 systemd[1]: Started libcrun container.
Dec 07 09:41:12 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/810c18320962efb57160598107776670a7bf70d48b8a06f0c42d508fa3a1a80f/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Dec 07 09:41:12 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/810c18320962efb57160598107776670a7bf70d48b8a06f0c42d508fa3a1a80f/merged/tmp/config supports timestamps until 2038 (0x7fffffff)
Dec 07 09:41:12 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/810c18320962efb57160598107776670a7bf70d48b8a06f0c42d508fa3a1a80f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 07 09:41:12 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/810c18320962efb57160598107776670a7bf70d48b8a06f0c42d508fa3a1a80f/merged/var/lib/ceph/mon/ceph-compute-1 supports timestamps until 2038 (0x7fffffff)
Dec 07 09:41:12 compute-1 podman[79882]: 2025-12-07 09:41:12.810699924 +0000 UTC m=+0.100833965 container init e43aa654295e3bbaf0163febd9111443a53c459e5a7d30837f50fd32f6c1d622 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_haibt, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 07 09:41:12 compute-1 podman[79882]: 2025-12-07 09:41:12.815513455 +0000 UTC m=+0.105647476 container start e43aa654295e3bbaf0163febd9111443a53c459e5a7d30837f50fd32f6c1d622 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_haibt, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 07 09:41:12 compute-1 podman[79882]: 2025-12-07 09:41:12.818925031 +0000 UTC m=+0.109059072 container attach e43aa654295e3bbaf0163febd9111443a53c459e5a7d30837f50fd32f6c1d622 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_haibt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 07 09:41:12 compute-1 podman[79882]: 2025-12-07 09:41:12.730700793 +0000 UTC m=+0.020834834 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 07 09:41:12 compute-1 systemd[1]: libpod-e43aa654295e3bbaf0163febd9111443a53c459e5a7d30837f50fd32f6c1d622.scope: Deactivated successfully.
Dec 07 09:41:12 compute-1 podman[79882]: 2025-12-07 09:41:12.892140931 +0000 UTC m=+0.182274952 container died e43aa654295e3bbaf0163febd9111443a53c459e5a7d30837f50fd32f6c1d622 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_haibt, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 07 09:41:12 compute-1 systemd[1]: var-lib-containers-storage-overlay-810c18320962efb57160598107776670a7bf70d48b8a06f0c42d508fa3a1a80f-merged.mount: Deactivated successfully.
Dec 07 09:41:12 compute-1 podman[79882]: 2025-12-07 09:41:12.927168171 +0000 UTC m=+0.217302202 container remove e43aa654295e3bbaf0163febd9111443a53c459e5a7d30837f50fd32f6c1d622 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec 07 09:41:12 compute-1 systemd[1]: libpod-conmon-e43aa654295e3bbaf0163febd9111443a53c459e5a7d30837f50fd32f6c1d622.scope: Deactivated successfully.
Dec 07 09:41:12 compute-1 systemd[1]: Reloading.
Dec 07 09:41:13 compute-1 systemd-rc-local-generator[79961]: /etc/rc.d/rc.local is not marked executable, skipping.
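The generator skips /etc/rc.d/rc.local purely because of its mode bits; if the boot-time script is actually wanted, making it executable is enough for systemd-rc-local-generator to emit rc-local.service on the next reload. A one-line sketch, relevant only if rc.local is meant to run:

    # Mark the legacy boot script executable so the generator produces rc-local.service
    chmod +x /etc/rc.d/rc.local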
Dec 07 09:41:13 compute-1 systemd-sysv-generator[79965]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:41:13 compute-1 systemd[1]: Reloading.
Dec 07 09:41:13 compute-1 systemd-sysv-generator[80006]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:41:13 compute-1 systemd-rc-local-generator[80002]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:41:13 compute-1 systemd[1]: Starting Ceph mon.compute-1 for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c...
Dec 07 09:41:13 compute-1 podman[80058]: 2025-12-07 09:41:13.63819777 +0000 UTC m=+0.039957925 container create e0f72a5bcd8eb9f209116459e5584cdf3cc46f1e44e6fbfdce2e9c6f682161e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mon-compute-1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 07 09:41:13 compute-1 podman[80058]: 2025-12-07 09:41:13.620066465 +0000 UTC m=+0.021826650 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 07 09:41:13 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7456c967c1baf3c35f21123d2b7e522d7873d5cec83798bb180d3baded631de2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 07 09:41:13 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7456c967c1baf3c35f21123d2b7e522d7873d5cec83798bb180d3baded631de2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 07 09:41:13 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7456c967c1baf3c35f21123d2b7e522d7873d5cec83798bb180d3baded631de2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 07 09:41:13 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7456c967c1baf3c35f21123d2b7e522d7873d5cec83798bb180d3baded631de2/merged/var/lib/ceph/mon/ceph-compute-1 supports timestamps until 2038 (0x7fffffff)
Dec 07 09:41:13 compute-1 podman[80058]: 2025-12-07 09:41:13.753308123 +0000 UTC m=+0.155068298 container init e0f72a5bcd8eb9f209116459e5584cdf3cc46f1e44e6fbfdce2e9c6f682161e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mon-compute-1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 07 09:41:13 compute-1 podman[80058]: 2025-12-07 09:41:13.758031842 +0000 UTC m=+0.159791987 container start e0f72a5bcd8eb9f209116459e5584cdf3cc46f1e44e6fbfdce2e9c6f682161e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mon-compute-1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 07 09:41:13 compute-1 bash[80058]: e0f72a5bcd8eb9f209116459e5584cdf3cc46f1e44e6fbfdce2e9c6f682161e7
Dec 07 09:41:13 compute-1 systemd[1]: Started Ceph mon.compute-1 for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
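With the unit reported as started, a quick cross-check of the newly deployed mon from the host is the cephadm and systemd view; a minimal sketch, with the unit name following the ceph-<fsid>@<daemon> pattern visible in the journal above:

    # List the cephadm-managed daemons on this host, which should now include mon.compute-1
    sudo cephadm ls

    # Inspect the systemd unit that was just started for this monitor
    sudo systemctl status ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@mon.compute-1.service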
Dec 07 09:41:13 compute-1 ceph-mon[80077]: set uid:gid to 167:167 (ceph:ceph)
Dec 07 09:41:13 compute-1 ceph-mon[80077]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Dec 07 09:41:13 compute-1 ceph-mon[80077]: pidfile_write: ignore empty --pid-file
Dec 07 09:41:13 compute-1 ceph-mon[80077]: load: jerasure load: lrc 
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb: RocksDB version: 7.9.2
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb: Git sha 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb: Compile date 2025-07-17 03:12:14
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb: DB SUMMARY
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb: DB Session ID:  JE48258Z8V09XG5T5TD7
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb: CURRENT file:  CURRENT
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb: IDENTITY file:  IDENTITY
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-1/store.db dir, Total Num: 0, files: 
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-1/store.db: 000004.log size: 511 ; 
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                         Options.error_if_exists: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                       Options.create_if_missing: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                         Options.paranoid_checks: 1
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                                     Options.env: 0x5563157aac20
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                                      Options.fs: PosixFileSystem
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                                Options.info_log: 0x5563169b9a20
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                Options.max_file_opening_threads: 16
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                              Options.statistics: (nil)
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                               Options.use_fsync: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                       Options.max_log_file_size: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                         Options.allow_fallocate: 1
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                        Options.use_direct_reads: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:          Options.create_missing_column_families: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                              Options.db_log_dir: 
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                                 Options.wal_dir: 
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                   Options.advise_random_on_open: 1
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                    Options.write_buffer_manager: 0x5563169bd900
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                            Options.rate_limiter: (nil)
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                  Options.unordered_write: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                               Options.row_cache: None
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                              Options.wal_filter: None
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:             Options.allow_ingest_behind: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:             Options.two_write_queues: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:             Options.manual_wal_flush: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:             Options.wal_compression: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:             Options.atomic_flush: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                 Options.log_readahead_size: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:             Options.allow_data_in_errors: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:             Options.db_host_id: __hostname__
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:             Options.max_background_jobs: 2
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:             Options.max_background_compactions: -1
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:             Options.max_subcompactions: 1
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:             Options.max_total_wal_size: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                          Options.max_open_files: -1
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                          Options.bytes_per_sync: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:       Options.compaction_readahead_size: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                  Options.max_background_flushes: -1
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb: Compression algorithms supported:
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:         kZSTD supported: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:         kXpressCompression supported: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:         kBZip2Compression supported: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:         kLZ4Compression supported: 1
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:         kZlibCompression supported: 1
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:         kLZ4HCCompression supported: 1
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:         kSnappyCompression supported: 1
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-1/store.db/MANIFEST-000005
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:           Options.merge_operator: 
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:        Options.compaction_filter: None
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:        Options.compaction_filter_factory: None
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:  Options.sst_partitioner_factory: None
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5563169b85c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5563169dd350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:        Options.write_buffer_size: 33554432
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:  Options.max_write_buffer_number: 2
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:          Options.compression: NoCompression
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:       Options.prefix_extractor: nullptr
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:             Options.num_levels: 7
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                  Options.compression_opts.level: 32767
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:               Options.compression_opts.strategy: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                  Options.compression_opts.enabled: false
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                        Options.arena_block_size: 1048576
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                Options.disable_auto_compactions: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                   Options.inplace_update_support: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                           Options.bloom_locality: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                    Options.max_successive_merges: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                Options.paranoid_file_checks: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                Options.force_consistency_checks: 1
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                Options.report_bg_io_stats: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                               Options.ttl: 2592000
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                       Options.enable_blob_files: false
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                           Options.min_blob_size: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                          Options.blob_file_size: 268435456
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb:                Options.blob_file_starting_level: 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-1/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 19b7cd17-892b-4642-8771-311739802c4a
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100473800019, "job": 1, "event": "recovery_started", "wal_files": [4]}
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100473801694, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1648, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 523, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 401, "raw_average_value_size": 80, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100473, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "19b7cd17-892b-4642-8771-311739802c4a", "db_session_id": "JE48258Z8V09XG5T5TD7", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100473801812, "job": 1, "event": "recovery_finished"}
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Dec 07 09:41:13 compute-1 sudo[79787]: pam_unix(sudo:session): session closed for user root
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5563169dee00
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb: DB pointer 0x556316ae8000
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 07 09:41:13 compute-1 ceph-mon[80077]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.10 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.10 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5563169dd350#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 07 09:41:13 compute-1 ceph-mon[80077]: mon.compute-1 does not exist in monmap, will attempt to join an existing cluster
Dec 07 09:41:13 compute-1 ceph-mon[80077]: using public_addr v2:192.168.122.101:0/0 -> [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0]
Dec 07 09:41:13 compute-1 ceph-mon[80077]: starting mon.compute-1 rank -1 at public addrs [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] at bind addrs [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-1 fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c
Dec 07 09:41:13 compute-1 ceph-mon[80077]: mon.compute-1@-1(???) e0 preinit fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c
Dec 07 09:41:14 compute-1 ceph-mon[80077]: mon.compute-1@-1(synchronizing).mds e1 new map
Dec 07 09:41:14 compute-1 ceph-mon[80077]: mon.compute-1@-1(synchronizing).mds e1 print_map
                                           e1
                                           btime 2025-12-07T09:39:07.860179+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Dec 07 09:41:14 compute-1 ceph-mon[80077]: mon.compute-1@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Dec 07 09:41:14 compute-1 ceph-mon[80077]: mon.compute-1@-1(synchronizing).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec 07 09:41:14 compute-1 ceph-mon[80077]: mon.compute-1@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in
Dec 07 09:41:14 compute-1 ceph-mon[80077]: mon.compute-1@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in
Dec 07 09:41:14 compute-1 ceph-mon[80077]: mon.compute-1@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in
Dec 07 09:41:14 compute-1 ceph-mon[80077]: mon.compute-1@-1(synchronizing).osd e4 e4: 1 total, 0 up, 1 in
Dec 07 09:41:14 compute-1 ceph-mon[80077]: mon.compute-1@-1(synchronizing).osd e5 e5: 2 total, 0 up, 2 in
Dec 07 09:41:14 compute-1 ceph-mon[80077]: mon.compute-1@-1(synchronizing).osd e6 e6: 2 total, 0 up, 2 in
Dec 07 09:41:14 compute-1 ceph-mon[80077]: mon.compute-1@-1(synchronizing).osd e7 e7: 2 total, 0 up, 2 in
Dec 07 09:41:14 compute-1 ceph-mon[80077]: mon.compute-1@-1(synchronizing).osd e8 e8: 2 total, 0 up, 2 in
Dec 07 09:41:14 compute-1 ceph-mon[80077]: mon.compute-1@-1(synchronizing).osd e9 e9: 2 total, 1 up, 2 in
Dec 07 09:41:14 compute-1 ceph-mon[80077]: mon.compute-1@-1(synchronizing).osd e10 e10: 2 total, 1 up, 2 in
Dec 07 09:41:14 compute-1 ceph-mon[80077]: mon.compute-1@-1(synchronizing).osd e11 e11: 2 total, 1 up, 2 in
Dec 07 09:41:14 compute-1 ceph-mon[80077]: mon.compute-1@-1(synchronizing).osd e12 e12: 2 total, 2 up, 2 in
Dec 07 09:41:14 compute-1 ceph-mon[80077]: mon.compute-1@-1(synchronizing).osd e13 e13: 2 total, 2 up, 2 in
Dec 07 09:41:14 compute-1 ceph-mon[80077]: mon.compute-1@-1(synchronizing).osd e14 e14: 2 total, 2 up, 2 in
Dec 07 09:41:14 compute-1 ceph-mon[80077]: mon.compute-1@-1(synchronizing).osd e15 e15: 2 total, 2 up, 2 in
Dec 07 09:41:14 compute-1 ceph-mon[80077]: mon.compute-1@-1(synchronizing).osd e15 crush map has features 3314933000852226048, adjusting msgr requires
Dec 07 09:41:14 compute-1 ceph-mon[80077]: mon.compute-1@-1(synchronizing).osd e15 crush map has features 288514051259236352, adjusting msgr requires
Dec 07 09:41:14 compute-1 ceph-mon[80077]: mon.compute-1@-1(synchronizing).osd e15 crush map has features 288514051259236352, adjusting msgr requires
Dec 07 09:41:14 compute-1 ceph-mon[80077]: mon.compute-1@-1(synchronizing).osd e15 crush map has features 288514051259236352, adjusting msgr requires
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: Updating compute-1:/etc/ceph/ceph.conf
Dec 07 09:41:14 compute-1 ceph-mon[80077]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 07 09:41:14 compute-1 ceph-mon[80077]: Updating compute-1:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec 07 09:41:14 compute-1 ceph-mon[80077]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 07 09:41:14 compute-1 ceph-mon[80077]: Updating compute-1:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring
Dec 07 09:41:14 compute-1 ceph-mon[80077]: pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:14 compute-1 ceph-mon[80077]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Dec 07 09:41:14 compute-1 ceph-mon[80077]: pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 07 09:41:14 compute-1 ceph-mon[80077]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Dec 07 09:41:14 compute-1 ceph-mon[80077]: pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: Deploying daemon crash.compute-1 on compute-1
Dec 07 09:41:14 compute-1 ceph-mon[80077]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
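The CEPHADM_APPLY_SPEC_FAIL health check above comes from the two spec failures a few lines earlier: compute-2 appears in the mon and mgr placements but is not yet registered with the orchestrator ("Cannot place ... on compute-2: Unknown hosts"). A minimal remediation sketch, assuming compute-2 resolves to 192.168.122.102 as the monmap later in this log shows, is to register the host; the already-declared mon/mgr specs are retried automatically on the next cephadm serve cycle:

    ceph orch host add compute-2 192.168.122.102

The "Deploying daemon mon.compute-2 on compute-2" and "Health check cleared: CEPHADM_APPLY_SPEC_FAIL" lines further below show exactly that outcome once the host becomes known.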
Dec 07 09:41:14 compute-1 ceph-mon[80077]: pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/2468099184' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "32dc95f1-8dbf-4ad2-8ecd-93489439352c"}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/2468099184' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "32dc95f1-8dbf-4ad2-8ecd-93489439352c"}]': finished
Dec 07 09:41:14 compute-1 ceph-mon[80077]: osdmap e4: 1 total, 0 up, 1 in
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/1839153269' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "24b45d5b-5e40-4ac8-980f-eccc62ab0425"}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/1839153269' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "24b45d5b-5e40-4ac8-980f-eccc62ab0425"}]': finished
Dec 07 09:41:14 compute-1 ceph-mon[80077]: osdmap e5: 2 total, 0 up, 2 in
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/3885774653' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/796051432' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 07 09:41:14 compute-1 ceph-mon[80077]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:14 compute-1 ceph-mon[80077]: pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 07 09:41:14 compute-1 ceph-mon[80077]: pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: Deploying daemon osd.0 on compute-0
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: Deploying daemon osd.1 on compute-1
Dec 07 09:41:14 compute-1 ceph-mon[80077]: pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/4258034374' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:14 compute-1 ceph-mon[80077]: pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='osd.0 [v2:192.168.122.100:6802/194844255,v1:192.168.122.100:6803/194844255]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='osd.0 [v2:192.168.122.100:6802/194844255,v1:192.168.122.100:6803/194844255]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Dec 07 09:41:14 compute-1 ceph-mon[80077]: pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 07 09:41:14 compute-1 ceph-mon[80077]: osdmap e6: 2 total, 0 up, 2 in
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='osd.0 [v2:192.168.122.100:6802/194844255,v1:192.168.122.100:6803/194844255]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='osd.1 [v2:192.168.122.101:6800/1470232336,v1:192.168.122.101:6801/1470232336]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='osd.0 [v2:192.168.122.100:6802/194844255,v1:192.168.122.100:6803/194844255]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='osd.1 [v2:192.168.122.101:6800/1470232336,v1:192.168.122.101:6801/1470232336]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Dec 07 09:41:14 compute-1 ceph-mon[80077]: osdmap e7: 2 total, 0 up, 2 in
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='osd.1 [v2:192.168.122.101:6800/1470232336,v1:192.168.122.101:6801/1470232336]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:14 compute-1 ceph-mon[80077]: purged_snaps scrub starts
Dec 07 09:41:14 compute-1 ceph-mon[80077]: purged_snaps scrub ok
Dec 07 09:41:14 compute-1 ceph-mon[80077]: purged_snaps scrub starts
Dec 07 09:41:14 compute-1 ceph-mon[80077]: purged_snaps scrub ok
Dec 07 09:41:14 compute-1 ceph-mon[80077]: pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='osd.1 [v2:192.168.122.101:6800/1470232336,v1:192.168.122.101:6801/1470232336]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Dec 07 09:41:14 compute-1 ceph-mon[80077]: osdmap e8: 2 total, 0 up, 2 in
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: Adjusting osd_memory_target on compute-1 to 5247M
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: Adjusting osd_memory_target on compute-0 to 128.0M
Dec 07 09:41:14 compute-1 ceph-mon[80077]: Unable to set osd_memory_target on compute-0 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
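The autotuned value of 134217728 bytes (128 MiB) is rejected because osd_memory_target has a hard minimum of 939524096 bytes (896 MiB): compute-0 has too little memory left for its OSD once the other daemons are accounted for. A sketch of the usual workarounds, assuming cephadm's default memory autotuning is in effect, is to pin an explicit value at or above the minimum using the same host mask the mgr uses above, or to exclude the OSD from autotuning:

    # pin a valid value for every OSD on compute-0
    ceph config set osd/host:compute-0 osd_memory_target 939524096

    # or stop cephadm from autotuning this particular OSD
    ceph config set osd.0 osd_memory_target_autotune false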
Dec 07 09:41:14 compute-1 ceph-mon[80077]: pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: pgmap v42: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: pgmap v43: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: pgmap v44: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: pgmap v45: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: OSD bench result of 717.937482 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: osd.0 [v2:192.168.122.100:6802/194844255,v1:192.168.122.100:6803/194844255] boot
Dec 07 09:41:14 compute-1 ceph-mon[80077]: osdmap e9: 2 total, 1 up, 2 in
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: pgmap v47: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Dec 07 09:41:14 compute-1 ceph-mon[80077]: osdmap e10: 2 total, 1 up, 2 in
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Dec 07 09:41:14 compute-1 ceph-mon[80077]: osdmap e11: 2 total, 1 up, 2 in
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: OSD bench result of 2499.858795 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
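Both bench results (717.94 IOPS for osd.0 above and 2499.86 IOPS for osd.1 here) fall outside the 50-500 IOPS sanity window for hdd-class devices, so mClock keeps the default capacity of 315 IOPS for each OSD. Following the recommendation in the message itself, a sketch assuming externally measured (e.g. fio) capacities of roughly 700 and 2500 IOPS would be:

    ceph config set osd.0 osd_mclock_max_capacity_iops_hdd 700
    ceph config set osd.1 osd_mclock_max_capacity_iops_hdd 2500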
Dec 07 09:41:14 compute-1 ceph-mon[80077]: pgmap v50: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec 07 09:41:14 compute-1 ceph-mon[80077]: osd.1 [v2:192.168.122.101:6800/1470232336,v1:192.168.122.101:6801/1470232336] boot
Dec 07 09:41:14 compute-1 ceph-mon[80077]: osdmap e12: 2 total, 2 up, 2 in
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: osdmap e13: 2 total, 2 up, 2 in
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: pgmap v53: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Dec 07 09:41:14 compute-1 ceph-mon[80077]: osdmap e14: 2 total, 2 up, 2 in
Dec 07 09:41:14 compute-1 ceph-mon[80077]: mgrmap e9: compute-0.dotugk(active, since 94s)
Dec 07 09:41:14 compute-1 ceph-mon[80077]: pgmap v55: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: Updating compute-2:/etc/ceph/ceph.conf
Dec 07 09:41:14 compute-1 ceph-mon[80077]: pgmap v56: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Dec 07 09:41:14 compute-1 ceph-mon[80077]: Updating compute-2:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec 07 09:41:14 compute-1 ceph-mon[80077]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 07 09:41:14 compute-1 ceph-mon[80077]: Updating compute-2:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring
Dec 07 09:41:14 compute-1 ceph-mon[80077]: pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:14 compute-1 ceph-mon[80077]: pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: Deploying daemon mon.compute-2 on compute-2
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/9648655' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Dec 07 09:41:14 compute-1 ceph-mon[80077]: Cluster is now healthy
Dec 07 09:41:14 compute-1 ceph-mon[80077]: pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Dec 07 09:41:14 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/2220788821' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 07 09:41:14 compute-1 ceph-mon[80077]: mon.compute-1@-1(synchronizing).paxosservice(auth 1..7) refresh upgraded, format 0 -> 3
Dec 07 09:41:17 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 16 pg[2.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [1] r=0 lpr=15 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:41:20 compute-1 ceph-mon[80077]: mon.compute-1@-1(probing) e3  my rank is now 2 (was -1)
Dec 07 09:41:20 compute-1 ceph-mon[80077]: log_channel(cluster) log [INF] : mon.compute-1 calling monitor election
Dec 07 09:41:20 compute-1 ceph-mon[80077]: paxos.2).electionLogic(0) init, first boot, initializing epoch at 1 
Dec 07 09:41:20 compute-1 ceph-mon[80077]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 07 09:41:23 compute-1 ceph-mon[80077]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 07 09:41:23 compute-1 ceph-mon[80077]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 07 09:41:23 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Dec 07 09:41:23 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout}
Dec 07 09:41:24 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_auth_request failed to assign global_id
Dec 07 09:41:24 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e16 e16: 2 total, 2 up, 2 in
Dec 07 09:41:25 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 07 09:41:25 compute-1 ceph-mon[80077]: mon.compute-0 calling monitor election
Dec 07 09:41:25 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 07 09:41:25 compute-1 ceph-mon[80077]: pgmap v61: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Dec 07 09:41:25 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 07 09:41:25 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 07 09:41:25 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 07 09:41:25 compute-1 ceph-mon[80077]: mon.compute-2 calling monitor election
Dec 07 09:41:25 compute-1 ceph-mon[80077]: pgmap v62: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Dec 07 09:41:25 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 07 09:41:25 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 07 09:41:25 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 07 09:41:25 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 07 09:41:25 compute-1 ceph-mon[80077]: pgmap v63: 2 pgs: 1 creating+peering, 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 07 09:41:25 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 07 09:41:25 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 07 09:41:25 compute-1 ceph-mon[80077]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Dec 07 09:41:25 compute-1 ceph-mon[80077]: monmap epoch 2
Dec 07 09:41:25 compute-1 ceph-mon[80077]: fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c
Dec 07 09:41:25 compute-1 ceph-mon[80077]: last_changed 2025-12-07T09:41:12.124181+0000
Dec 07 09:41:25 compute-1 ceph-mon[80077]: created 2025-12-07T09:39:05.386379+0000
Dec 07 09:41:25 compute-1 ceph-mon[80077]: min_mon_release 19 (squid)
Dec 07 09:41:25 compute-1 ceph-mon[80077]: election_strategy: 1
Dec 07 09:41:25 compute-1 ceph-mon[80077]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 07 09:41:25 compute-1 ceph-mon[80077]: 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Dec 07 09:41:25 compute-1 ceph-mon[80077]: fsmap 
Dec 07 09:41:25 compute-1 ceph-mon[80077]: osdmap e15: 2 total, 2 up, 2 in
Dec 07 09:41:25 compute-1 ceph-mon[80077]: mgrmap e9: compute-0.dotugk(active, since 109s)
Dec 07 09:41:25 compute-1 ceph-mon[80077]: overall HEALTH_OK
Dec 07 09:41:25 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:25 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:25 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:25 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:25 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.ntknug", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 07 09:41:25 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.ntknug", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec 07 09:41:25 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 07 09:41:25 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_auth_request failed to assign global_id
Dec 07 09:41:25 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 07 09:41:25 compute-1 ceph-mon[80077]: mgrc update_daemon_metadata mon.compute-1 metadata {addrs=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-1,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-1,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025,kernel_version=5.14.0-645.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864316,os=Linux}
Dec 07 09:41:27 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e17 e17: 2 total, 2 up, 2 in
Dec 07 09:41:27 compute-1 ceph-mon[80077]: Deploying daemon mgr.compute-2.ntknug on compute-2
Dec 07 09:41:27 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 07 09:41:27 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 07 09:41:27 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 07 09:41:27 compute-1 ceph-mon[80077]: mon.compute-0 calling monitor election
Dec 07 09:41:27 compute-1 ceph-mon[80077]: mon.compute-2 calling monitor election
Dec 07 09:41:27 compute-1 ceph-mon[80077]: pgmap v65: 2 pgs: 1 creating+peering, 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 07 09:41:27 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 07 09:41:27 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 07 09:41:27 compute-1 ceph-mon[80077]: pgmap v66: 2 pgs: 1 creating+peering, 1 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 07 09:41:27 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 07 09:41:27 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 07 09:41:27 compute-1 ceph-mon[80077]: pgmap v67: 2 pgs: 2 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 07 09:41:27 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 07 09:41:27 compute-1 ceph-mon[80077]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec 07 09:41:27 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 07 09:41:27 compute-1 ceph-mon[80077]: monmap epoch 3
Dec 07 09:41:27 compute-1 ceph-mon[80077]: fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c
Dec 07 09:41:27 compute-1 ceph-mon[80077]: last_changed 2025-12-07T09:41:18.042048+0000
Dec 07 09:41:27 compute-1 ceph-mon[80077]: created 2025-12-07T09:39:05.386379+0000
Dec 07 09:41:27 compute-1 ceph-mon[80077]: min_mon_release 19 (squid)
Dec 07 09:41:27 compute-1 ceph-mon[80077]: election_strategy: 1
Dec 07 09:41:27 compute-1 ceph-mon[80077]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 07 09:41:27 compute-1 ceph-mon[80077]: 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Dec 07 09:41:27 compute-1 ceph-mon[80077]: 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Dec 07 09:41:27 compute-1 ceph-mon[80077]: fsmap 
Dec 07 09:41:27 compute-1 ceph-mon[80077]: osdmap e16: 2 total, 2 up, 2 in
Dec 07 09:41:27 compute-1 ceph-mon[80077]: mgrmap e9: compute-0.dotugk(active, since 116s)
Dec 07 09:41:27 compute-1 ceph-mon[80077]: Health detail: HEALTH_WARN 1 pool(s) do not have an application enabled
Dec 07 09:41:27 compute-1 ceph-mon[80077]: [WRN] POOL_APP_NOT_ENABLED: 1 pool(s) do not have an application enabled
Dec 07 09:41:27 compute-1 ceph-mon[80077]:     application not enabled on pool 'vms'
Dec 07 09:41:27 compute-1 ceph-mon[80077]:     use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
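The warning applies to the 'vms' pool created a few lines above. Assuming these pools are meant to back OpenStack Nova/Cinder over RBD (suggested by the pool names, not stated in the log), tagging them with the rbd application clears POOL_APP_NOT_ENABLED:

    ceph osd pool application enable vms rbd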
Dec 07 09:41:27 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/490793873' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 07 09:41:28 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e18 e18: 2 total, 2 up, 2 in
Dec 07 09:41:28 compute-1 ceph-mon[80077]: mon.compute-1 calling monitor election
Dec 07 09:41:28 compute-1 ceph-mon[80077]: pgmap v68: 2 pgs: 2 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 07 09:41:28 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 07 09:41:28 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 07 09:41:28 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:28 compute-1 ceph-mon[80077]: pgmap v69: 2 pgs: 2 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 07 09:41:28 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 07 09:41:28 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/490793873' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 07 09:41:28 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:28 compute-1 ceph-mon[80077]: osdmap e17: 2 total, 2 up, 2 in
Dec 07 09:41:28 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Dec 07 09:41:28 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:28 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:28 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.buauyv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 07 09:41:28 compute-1 sudo[80117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 09:41:28 compute-1 sudo[80117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:41:28 compute-1 sudo[80117]: pam_unix(sudo:session): session closed for user root
Dec 07 09:41:28 compute-1 sudo[80142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c
Dec 07 09:41:28 compute-1 sudo[80142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:41:29 compute-1 podman[80206]: 2025-12-07 09:41:29.311031078 +0000 UTC m=+0.063061362 container create 7be6bbf8c043152797a83d9ba8fe7bd8dda9a7b1a6b86f6cc3eb6f2e53600eae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_tharp, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 07 09:41:29 compute-1 podman[80206]: 2025-12-07 09:41:29.266767444 +0000 UTC m=+0.018797748 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 07 09:41:29 compute-1 systemd[1]: Started libpod-conmon-7be6bbf8c043152797a83d9ba8fe7bd8dda9a7b1a6b86f6cc3eb6f2e53600eae.scope.
Dec 07 09:41:29 compute-1 systemd[1]: Started libcrun container.
Dec 07 09:41:29 compute-1 podman[80206]: 2025-12-07 09:41:29.420369845 +0000 UTC m=+0.172400229 container init 7be6bbf8c043152797a83d9ba8fe7bd8dda9a7b1a6b86f6cc3eb6f2e53600eae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_tharp, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 07 09:41:29 compute-1 podman[80206]: 2025-12-07 09:41:29.429723097 +0000 UTC m=+0.181753401 container start 7be6bbf8c043152797a83d9ba8fe7bd8dda9a7b1a6b86f6cc3eb6f2e53600eae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_tharp, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 07 09:41:29 compute-1 podman[80206]: 2025-12-07 09:41:29.433973172 +0000 UTC m=+0.186003546 container attach 7be6bbf8c043152797a83d9ba8fe7bd8dda9a7b1a6b86f6cc3eb6f2e53600eae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_tharp, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec 07 09:41:29 compute-1 beautiful_tharp[80222]: 167 167
Dec 07 09:41:29 compute-1 systemd[1]: libpod-7be6bbf8c043152797a83d9ba8fe7bd8dda9a7b1a6b86f6cc3eb6f2e53600eae.scope: Deactivated successfully.
Dec 07 09:41:29 compute-1 podman[80206]: 2025-12-07 09:41:29.439053839 +0000 UTC m=+0.191084133 container died 7be6bbf8c043152797a83d9ba8fe7bd8dda9a7b1a6b86f6cc3eb6f2e53600eae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_tharp, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 07 09:41:29 compute-1 systemd[1]: var-lib-containers-storage-overlay-7f6806abecc68694cb64b1fe2820faf0cb4773e4db6eec9ab0d0f255cd8eebe3-merged.mount: Deactivated successfully.
Dec 07 09:41:29 compute-1 podman[80206]: 2025-12-07 09:41:29.47991375 +0000 UTC m=+0.231944034 container remove 7be6bbf8c043152797a83d9ba8fe7bd8dda9a7b1a6b86f6cc3eb6f2e53600eae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_tharp, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec 07 09:41:29 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e18 _set_new_cache_sizes cache_size:1019942901 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 07 09:41:29 compute-1 systemd[1]: libpod-conmon-7be6bbf8c043152797a83d9ba8fe7bd8dda9a7b1a6b86f6cc3eb6f2e53600eae.scope: Deactivated successfully.
Dec 07 09:41:29 compute-1 systemd[1]: Reloading.
Dec 07 09:41:29 compute-1 systemd-rc-local-generator[80259]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:41:29 compute-1 systemd-sysv-generator[80265]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:41:29 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e19 e19: 2 total, 2 up, 2 in
Dec 07 09:41:29 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.buauyv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec 07 09:41:29 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 07 09:41:29 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:41:29 compute-1 ceph-mon[80077]: Deploying daemon mgr.compute-1.buauyv on compute-1
Dec 07 09:41:29 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Dec 07 09:41:29 compute-1 ceph-mon[80077]: osdmap e18: 2 total, 2 up, 2 in
Dec 07 09:41:29 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Dec 07 09:41:29 compute-1 ceph-mon[80077]: pgmap v72: 3 pgs: 1 unknown, 2 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 07 09:41:29 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 07 09:41:29 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/831807626' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 07 09:41:29 compute-1 systemd[1]: Reloading.
Dec 07 09:41:29 compute-1 systemd-rc-local-generator[80302]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:41:29 compute-1 systemd-sysv-generator[80308]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:41:29 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 19 pg[2.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=19 pruub=11.572468758s) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 active pruub 57.496944427s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:41:29 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 19 pg[2.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=19 pruub=11.572468758s) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 unknown pruub 57.496944427s@ mbc={}] state<Start>: transitioning to Primary
Dec 07 09:41:30 compute-1 systemd[1]: Starting Ceph mgr.compute-1.buauyv for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c...
Dec 07 09:41:30 compute-1 podman[80363]: 2025-12-07 09:41:30.228902179 +0000 UTC m=+0.045871528 container create 786fc7fc7d21bec38d3a8918ababf2b63577f00c1d3f201d23a2585eab8e0ee8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 07 09:41:30 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29c6ce9a4f41fc2bc83334ff24d5c53b6c959be41a3c02f77e93b1941caf0c00/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 07 09:41:30 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29c6ce9a4f41fc2bc83334ff24d5c53b6c959be41a3c02f77e93b1941caf0c00/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 07 09:41:30 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29c6ce9a4f41fc2bc83334ff24d5c53b6c959be41a3c02f77e93b1941caf0c00/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 07 09:41:30 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29c6ce9a4f41fc2bc83334ff24d5c53b6c959be41a3c02f77e93b1941caf0c00/merged/var/lib/ceph/mgr/ceph-compute-1.buauyv supports timestamps until 2038 (0x7fffffff)
Dec 07 09:41:30 compute-1 podman[80363]: 2025-12-07 09:41:30.285873074 +0000 UTC m=+0.102842483 container init 786fc7fc7d21bec38d3a8918ababf2b63577f00c1d3f201d23a2585eab8e0ee8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 07 09:41:30 compute-1 podman[80363]: 2025-12-07 09:41:30.291184178 +0000 UTC m=+0.108153517 container start 786fc7fc7d21bec38d3a8918ababf2b63577f00c1d3f201d23a2585eab8e0ee8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 07 09:41:30 compute-1 bash[80363]: 786fc7fc7d21bec38d3a8918ababf2b63577f00c1d3f201d23a2585eab8e0ee8
Dec 07 09:41:30 compute-1 podman[80363]: 2025-12-07 09:41:30.205315993 +0000 UTC m=+0.022285322 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 07 09:41:30 compute-1 systemd[1]: Started Ceph mgr.compute-1.buauyv for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec 07 09:41:30 compute-1 sudo[80142]: pam_unix(sudo:session): session closed for user root
Dec 07 09:41:30 compute-1 ceph-mgr[80383]: set uid:gid to 167:167 (ceph:ceph)
Dec 07 09:41:30 compute-1 ceph-mgr[80383]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec 07 09:41:30 compute-1 ceph-mgr[80383]: pidfile_write: ignore empty --pid-file
Dec 07 09:41:30 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'alerts'
Dec 07 09:41:30 compute-1 ceph-mgr[80383]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 07 09:41:30 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'balancer'
Dec 07 09:41:30 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:41:30.473+0000 7f4c75fbc140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 07 09:41:30 compute-1 ceph-mgr[80383]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 07 09:41:30 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'cephadm'
Dec 07 09:41:30 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:41:30.564+0000 7f4c75fbc140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 07 09:41:30 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e20 e20: 2 total, 2 up, 2 in
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.1f( empty local-lis/les=15/16 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.1e( empty local-lis/les=15/16 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.1d( empty local-lis/les=15/16 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.b( empty local-lis/les=15/16 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.a( empty local-lis/les=15/16 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.9( empty local-lis/les=15/16 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.8( empty local-lis/les=15/16 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.7( empty local-lis/les=15/16 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.6( empty local-lis/les=15/16 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.5( empty local-lis/les=15/16 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.4( empty local-lis/les=15/16 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.2( empty local-lis/les=15/16 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.1( empty local-lis/les=15/16 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.3( empty local-lis/les=15/16 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.c( empty local-lis/les=15/16 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.e( empty local-lis/les=15/16 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.f( empty local-lis/les=15/16 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.d( empty local-lis/les=15/16 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.10( empty local-lis/les=15/16 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.11( empty local-lis/les=15/16 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.12( empty local-lis/les=15/16 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.13( empty local-lis/les=15/16 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.14( empty local-lis/les=15/16 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.15( empty local-lis/les=15/16 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.17( empty local-lis/les=15/16 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.16( empty local-lis/les=15/16 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.18( empty local-lis/les=15/16 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.19( empty local-lis/les=15/16 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.1a( empty local-lis/les=15/16 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.1b( empty local-lis/les=15/16 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.1c( empty local-lis/les=15/16 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.1f( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:41:30 compute-1 ceph-mon[80077]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 07 09:41:30 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Dec 07 09:41:30 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec 07 09:41:30 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/831807626' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 07 09:41:30 compute-1 ceph-mon[80077]: osdmap e19: 2 total, 2 up, 2 in
Dec 07 09:41:30 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:30 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:30 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:30 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:30 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 07 09:41:30 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 07 09:41:30 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:41:30 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/2061751046' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 07 09:41:30 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/2061751046' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 07 09:41:30 compute-1 ceph-mon[80077]: osdmap e20: 2 total, 2 up, 2 in
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.1d( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.a( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.1e( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.b( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.9( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.8( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.5( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.6( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.7( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.4( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.2( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.0( empty local-lis/les=19/20 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.1( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.3( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.e( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.d( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.f( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.c( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.10( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.12( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.11( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.15( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.13( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.14( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.17( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.16( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.18( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.1a( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.19( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.1b( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:41:30 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 20 pg[2.1c( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=15/15 les/c/f=16/16/0 sis=19) [1] r=0 lpr=19 pi=[15,19)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:41:31 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Dec 07 09:41:31 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Dec 07 09:41:31 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'crash'
Dec 07 09:41:31 compute-1 ceph-mgr[80383]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 07 09:41:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:41:31.363+0000 7f4c75fbc140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 07 09:41:31 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'dashboard'
Dec 07 09:41:31 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e21 e21: 2 total, 2 up, 2 in
Dec 07 09:41:31 compute-1 ceph-mon[80077]: Deploying daemon crash.compute-2 on compute-2
Dec 07 09:41:31 compute-1 ceph-mon[80077]: pgmap v75: 36 pgs: 34 unknown, 2 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 07 09:41:31 compute-1 ceph-mon[80077]: 2.1f scrub starts
Dec 07 09:41:31 compute-1 ceph-mon[80077]: 2.1f scrub ok
Dec 07 09:41:31 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:31 compute-1 ceph-mon[80077]: Standby manager daemon compute-2.ntknug started
Dec 07 09:41:31 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1239247002' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 07 09:41:31 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'devicehealth'
Dec 07 09:41:32 compute-1 ceph-mgr[80383]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 07 09:41:32 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'diskprediction_local'
Dec 07 09:41:32 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:41:32.004+0000 7f4c75fbc140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 07 09:41:32 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Dec 07 09:41:32 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Dec 07 09:41:32 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 07 09:41:32 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 07 09:41:32 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]:   from numpy import show_config as show_numpy_config
Dec 07 09:41:32 compute-1 ceph-mgr[80383]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 07 09:41:32 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:41:32.169+0000 7f4c75fbc140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 07 09:41:32 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'influx'
Dec 07 09:41:32 compute-1 ceph-mgr[80383]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 07 09:41:32 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'insights'
Dec 07 09:41:32 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:41:32.237+0000 7f4c75fbc140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 07 09:41:32 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'iostat'
Dec 07 09:41:32 compute-1 ceph-mgr[80383]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 07 09:41:32 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'k8sevents'
Dec 07 09:41:32 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:41:32.372+0000 7f4c75fbc140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 07 09:41:32 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'localpool'
Dec 07 09:41:32 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'mds_autoscaler'
Dec 07 09:41:33 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 2.a scrub starts
Dec 07 09:41:33 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 2.a scrub ok
Dec 07 09:41:33 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'mirroring'
Dec 07 09:41:33 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'nfs'
Dec 07 09:41:33 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e22 e22: 2 total, 2 up, 2 in
Dec 07 09:41:33 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 22 pg[7.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [1] r=0 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:41:33 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1239247002' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 07 09:41:33 compute-1 ceph-mon[80077]: osdmap e21: 2 total, 2 up, 2 in
Dec 07 09:41:33 compute-1 ceph-mon[80077]: 2.1d scrub starts
Dec 07 09:41:33 compute-1 ceph-mon[80077]: mgrmap e10: compute-0.dotugk(active, since 2m), standbys: compute-2.ntknug
Dec 07 09:41:33 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mgr metadata", "who": "compute-2.ntknug", "id": "compute-2.ntknug"}]: dispatch
Dec 07 09:41:33 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:33 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:33 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:33 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:33 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 07 09:41:33 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 07 09:41:33 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:41:33 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 07 09:41:33 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:41:33 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/3140489880' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 07 09:41:33 compute-1 ceph-mgr[80383]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 07 09:41:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:41:33.359+0000 7f4c75fbc140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 07 09:41:33 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'orchestrator'
Dec 07 09:41:33 compute-1 ceph-mgr[80383]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 07 09:41:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:41:33.581+0000 7f4c75fbc140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 07 09:41:33 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'osd_perf_query'
Dec 07 09:41:33 compute-1 ceph-mgr[80383]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 07 09:41:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:41:33.659+0000 7f4c75fbc140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 07 09:41:33 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'osd_support'
Dec 07 09:41:33 compute-1 ceph-mgr[80383]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 07 09:41:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:41:33.726+0000 7f4c75fbc140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 07 09:41:33 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'pg_autoscaler'
Dec 07 09:41:33 compute-1 ceph-mgr[80383]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 07 09:41:33 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'progress'
Dec 07 09:41:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:41:33.809+0000 7f4c75fbc140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 07 09:41:33 compute-1 ceph-mgr[80383]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 07 09:41:33 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'prometheus'
Dec 07 09:41:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:41:33.883+0000 7f4c75fbc140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 07 09:41:34 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Dec 07 09:41:34 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Dec 07 09:41:34 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e23 e23: 2 total, 2 up, 2 in
Dec 07 09:41:34 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 23 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [1] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:41:34 compute-1 ceph-mon[80077]: 2.1d scrub ok
Dec 07 09:41:34 compute-1 ceph-mon[80077]: pgmap v77: 37 pgs: 1 creating+peering, 36 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 07 09:41:34 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 07 09:41:34 compute-1 ceph-mon[80077]: 2.a scrub starts
Dec 07 09:41:34 compute-1 ceph-mon[80077]: 2.a scrub ok
Dec 07 09:41:34 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/3140489880' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 07 09:41:34 compute-1 ceph-mon[80077]: osdmap e22: 2 total, 2 up, 2 in
Dec 07 09:41:34 compute-1 ceph-mon[80077]: 2.1e scrub starts
Dec 07 09:41:34 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1573311624' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Dec 07 09:41:34 compute-1 ceph-mon[80077]: 2.1e scrub ok
Dec 07 09:41:34 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Dec 07 09:41:34 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1573311624' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Dec 07 09:41:34 compute-1 ceph-mon[80077]: osdmap e23: 2 total, 2 up, 2 in
Dec 07 09:41:34 compute-1 ceph-mgr[80383]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 07 09:41:34 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'rbd_support'
Dec 07 09:41:34 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:41:34.261+0000 7f4c75fbc140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 07 09:41:34 compute-1 ceph-mgr[80383]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 07 09:41:34 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'restful'
Dec 07 09:41:34 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:41:34.358+0000 7f4c75fbc140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 07 09:41:34 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e23 _set_new_cache_sizes cache_size:1020053344 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 07 09:41:34 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'rgw'
Dec 07 09:41:34 compute-1 sshd-session[80415]: banner exchange: Connection from 3.137.73.221 port 54476: invalid format
Dec 07 09:41:34 compute-1 ceph-mgr[80383]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 07 09:41:34 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'rook'
Dec 07 09:41:34 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:41:34.782+0000 7f4c75fbc140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 07 09:41:35 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e24 e24: 3 total, 2 up, 3 in
Dec 07 09:41:35 compute-1 sshd-session[80416]: banner exchange: Connection from 3.137.73.221 port 54492: invalid format
Dec 07 09:41:35 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Dec 07 09:41:35 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Dec 07 09:41:35 compute-1 ceph-mon[80077]: pgmap v80: 69 pgs: 32 unknown, 1 creating+peering, 36 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 07 09:41:35 compute-1 ceph-mon[80077]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "0c34425f-bd69-4050-94f0-696d2e70c759"}]: dispatch
Dec 07 09:41:35 compute-1 ceph-mon[80077]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "0c34425f-bd69-4050-94f0-696d2e70c759"}]': finished
Dec 07 09:41:35 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2408157544' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "0c34425f-bd69-4050-94f0-696d2e70c759"}]: dispatch
Dec 07 09:41:35 compute-1 ceph-mon[80077]: osdmap e24: 3 total, 2 up, 3 in
Dec 07 09:41:35 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 07 09:41:35 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/3615880316' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Dec 07 09:41:35 compute-1 ceph-mon[80077]: 2.1c scrub starts
Dec 07 09:41:35 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:41:35.361+0000 7f4c75fbc140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 07 09:41:35 compute-1 ceph-mgr[80383]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 07 09:41:35 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'selftest'
Dec 07 09:41:35 compute-1 ceph-mgr[80383]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 07 09:41:35 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'snap_schedule'
Dec 07 09:41:35 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:41:35.436+0000 7f4c75fbc140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 07 09:41:35 compute-1 ceph-mgr[80383]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 07 09:41:35 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:41:35.515+0000 7f4c75fbc140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 07 09:41:35 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'stats'
Dec 07 09:41:35 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'status'
Dec 07 09:41:35 compute-1 ceph-mgr[80383]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec 07 09:41:35 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:41:35.656+0000 7f4c75fbc140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec 07 09:41:35 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'telegraf'
Dec 07 09:41:35 compute-1 ceph-mgr[80383]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 07 09:41:35 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'telemetry'
Dec 07 09:41:35 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:41:35.726+0000 7f4c75fbc140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 07 09:41:35 compute-1 ceph-mgr[80383]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 07 09:41:35 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:41:35.888+0000 7f4c75fbc140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 07 09:41:35 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'test_orchestrator'
Dec 07 09:41:36 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e25 e25: 3 total, 2 up, 3 in
Dec 07 09:41:36 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 2.b scrub starts
Dec 07 09:41:36 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 2.b scrub ok
Dec 07 09:41:36 compute-1 ceph-mgr[80383]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 07 09:41:36 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:41:36.116+0000 7f4c75fbc140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 07 09:41:36 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'volumes'
Dec 07 09:41:36 compute-1 ceph-mon[80077]: 2.1c scrub ok
Dec 07 09:41:36 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/1478001259' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec 07 09:41:36 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/3615880316' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Dec 07 09:41:36 compute-1 ceph-mon[80077]: osdmap e25: 3 total, 2 up, 3 in
Dec 07 09:41:36 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 07 09:41:36 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:36 compute-1 ceph-mgr[80383]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 07 09:41:36 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:41:36.395+0000 7f4c75fbc140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 07 09:41:36 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'zabbix'
Dec 07 09:41:36 compute-1 ceph-mgr[80383]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 07 09:41:36 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:41:36.467+0000 7f4c75fbc140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 07 09:41:36 compute-1 ceph-mgr[80383]: ms_deliver_dispatch: unhandled message 0x5622f0650d00 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Dec 07 09:41:37 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Dec 07 09:41:37 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Dec 07 09:41:38 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Dec 07 09:41:38 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Dec 07 09:41:38 compute-1 ceph-mon[80077]: 2.b scrub starts
Dec 07 09:41:38 compute-1 ceph-mon[80077]: 2.b scrub ok
Dec 07 09:41:38 compute-1 ceph-mon[80077]: Standby manager daemon compute-1.buauyv started
Dec 07 09:41:38 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:38 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:38 compute-1 ceph-mon[80077]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 07 09:41:38 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/2197147500' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Dec 07 09:41:38 compute-1 ceph-mon[80077]: 3.1b scrub starts
Dec 07 09:41:38 compute-1 ceph-mon[80077]: 3.1b scrub ok
Dec 07 09:41:38 compute-1 ceph-mon[80077]: pgmap v83: 69 pgs: 31 unknown, 1 creating+peering, 37 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 07 09:41:39 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Dec 07 09:41:39 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Dec 07 09:41:39 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e26 e26: 3 total, 2 up, 3 in
Dec 07 09:41:39 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e26 _set_new_cache_sizes cache_size:1020054713 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 07 09:41:40 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Dec 07 09:41:40 compute-1 ceph-mon[80077]: 2.8 scrub starts
Dec 07 09:41:40 compute-1 ceph-mon[80077]: 2.8 scrub ok
Dec 07 09:41:40 compute-1 ceph-mon[80077]: 3.8 scrub starts
Dec 07 09:41:40 compute-1 ceph-mon[80077]: 3.8 scrub ok
Dec 07 09:41:40 compute-1 ceph-mon[80077]: 2.6 scrub starts
Dec 07 09:41:40 compute-1 ceph-mon[80077]: 2.6 scrub ok
Dec 07 09:41:40 compute-1 ceph-mon[80077]: 3.1c scrub starts
Dec 07 09:41:40 compute-1 ceph-mon[80077]: 3.1c scrub ok
Dec 07 09:41:40 compute-1 ceph-mon[80077]: pgmap v84: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 07 09:41:40 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 07 09:41:40 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 07 09:41:40 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/2197147500' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Dec 07 09:41:40 compute-1 ceph-mon[80077]: mgrmap e11: compute-0.dotugk(active, since 2m), standbys: compute-2.ntknug, compute-1.buauyv
Dec 07 09:41:40 compute-1 ceph-mon[80077]: osdmap e26: 3 total, 2 up, 3 in
Dec 07 09:41:40 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 07 09:41:40 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mgr metadata", "who": "compute-1.buauyv", "id": "compute-1.buauyv"}]: dispatch
Dec 07 09:41:40 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Dec 07 09:41:40 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e27 e27: 3 total, 2 up, 3 in
Dec 07 09:41:40 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 27 pg[3.1a( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27) [1] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:41:40 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 27 pg[3.16( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27) [1] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:41:40 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 27 pg[3.14( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27) [1] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:41:40 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 27 pg[3.15( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27) [1] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:41:40 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 27 pg[3.13( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27) [1] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:41:40 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 27 pg[3.10( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27) [1] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:41:40 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 27 pg[3.11( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27) [1] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:41:40 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 27 pg[3.e( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27) [1] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:41:40 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 27 pg[3.f( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27) [1] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:41:40 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 27 pg[3.c( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27) [1] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:41:40 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 27 pg[3.d( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27) [1] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:41:40 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 27 pg[3.3( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27) [1] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:41:40 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 27 pg[3.5( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27) [1] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:41:40 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 27 pg[3.9( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27) [1] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:41:40 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 27 pg[3.a( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27) [1] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:41:40 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 27 pg[3.1d( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27) [1] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:41:40 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 27 pg[3.1c( empty local-lis/les=0/0 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27) [1] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:41:40 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 27 pg[2.1f( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27 pruub=14.471537590s) [0] r=-1 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active pruub 70.713996887s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:41:40 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 27 pg[2.1f( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27 pruub=14.471510887s) [0] r=-1 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.713996887s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:41:40 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 27 pg[2.a( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27 pruub=14.479578972s) [0] r=-1 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active pruub 70.722122192s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:41:40 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 27 pg[2.a( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27 pruub=14.479563713s) [0] r=-1 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.722122192s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:41:40 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 27 pg[2.1e( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27 pruub=14.479604721s) [0] r=-1 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active pruub 70.722190857s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:41:40 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 27 pg[2.9( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27 pruub=14.479772568s) [0] r=-1 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active pruub 70.722373962s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:41:40 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 27 pg[2.1e( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27 pruub=14.479570389s) [0] r=-1 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.722190857s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:41:40 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 27 pg[2.6( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27 pruub=14.479813576s) [0] r=-1 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active pruub 70.722480774s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:41:40 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 27 pg[2.9( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27 pruub=14.479742050s) [0] r=-1 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.722373962s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:41:40 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 27 pg[2.6( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27 pruub=14.479798317s) [0] r=-1 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.722480774s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:41:40 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 27 pg[2.1( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27 pruub=14.479975700s) [0] r=-1 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active pruub 70.722763062s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:41:40 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 27 pg[2.4( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27 pruub=14.479890823s) [0] r=-1 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active pruub 70.722679138s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:41:40 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 27 pg[2.1( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27 pruub=14.479960442s) [0] r=-1 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.722763062s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:41:40 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 27 pg[2.4( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27 pruub=14.479879379s) [0] r=-1 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.722679138s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:41:40 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 27 pg[2.d( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27 pruub=14.479836464s) [0] r=-1 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active pruub 70.722778320s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:41:40 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 27 pg[2.c( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27 pruub=14.479870796s) [0] r=-1 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active pruub 70.722816467s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:41:40 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 27 pg[2.10( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27 pruub=14.479882240s) [0] r=-1 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active pruub 70.722839355s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:41:40 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 27 pg[2.d( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27 pruub=14.479825974s) [0] r=-1 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.722778320s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:41:40 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 27 pg[2.e( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27 pruub=14.479790688s) [0] r=-1 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active pruub 70.722770691s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:41:40 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 27 pg[2.c( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27 pruub=14.479850769s) [0] r=-1 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.722816467s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:41:40 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 27 pg[2.10( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27 pruub=14.479846954s) [0] r=-1 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.722839355s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:41:40 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 27 pg[2.e( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27 pruub=14.479775429s) [0] r=-1 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.722770691s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:41:40 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 27 pg[2.13( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27 pruub=14.479795456s) [0] r=-1 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active pruub 70.722862244s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:41:40 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 27 pg[2.13( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27 pruub=14.479785919s) [0] r=-1 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.722862244s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:41:40 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 27 pg[2.15( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27 pruub=14.479729652s) [0] r=-1 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active pruub 70.722854614s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:41:40 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 27 pg[2.15( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27 pruub=14.479691505s) [0] r=-1 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.722854614s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:41:40 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 27 pg[2.19( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27 pruub=14.479840279s) [0] r=-1 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active pruub 70.723106384s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:41:40 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 27 pg[2.19( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27 pruub=14.479829788s) [0] r=-1 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.723106384s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:41:40 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 27 pg[2.1b( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27 pruub=14.479821205s) [0] r=-1 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active pruub 70.723106384s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:41:40 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 27 pg[2.1b( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=27 pruub=14.479804993s) [0] r=-1 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.723106384s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:41:41 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Dec 07 09:41:41 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Dec 07 09:41:41 compute-1 ceph-mon[80077]: 2.7 scrub starts
Dec 07 09:41:41 compute-1 ceph-mon[80077]: 2.7 scrub ok
Dec 07 09:41:41 compute-1 ceph-mon[80077]: 3.1f scrub starts
Dec 07 09:41:41 compute-1 ceph-mon[80077]: 3.1f scrub ok
Dec 07 09:41:41 compute-1 ceph-mon[80077]: 2.5 scrub starts
Dec 07 09:41:41 compute-1 ceph-mon[80077]: 2.5 scrub ok
Dec 07 09:41:41 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 07 09:41:41 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 07 09:41:41 compute-1 ceph-mon[80077]: osdmap e27: 3 total, 2 up, 3 in
Dec 07 09:41:41 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 07 09:41:41 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1301740075' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Dec 07 09:41:41 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Dec 07 09:41:41 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:41:41 compute-1 ceph-mon[80077]: Deploying daemon osd.2 on compute-2
Dec 07 09:41:41 compute-1 ceph-mon[80077]: pgmap v87: 69 pgs: 69 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 07 09:41:42 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Dec 07 09:41:42 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Dec 07 09:41:42 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e28 e28: 3 total, 2 up, 3 in
Dec 07 09:41:43 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Dec 07 09:41:43 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Dec 07 09:41:44 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Dec 07 09:41:46 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e28 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 07 09:41:46 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Dec 07 09:41:46 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Dec 07 09:41:46 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Dec 07 09:41:47 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Dec 07 09:41:48 compute-1 ceph-mon[80077]: 3.1e deep-scrub starts
Dec 07 09:41:48 compute-1 ceph-mon[80077]: 3.1e deep-scrub ok
Dec 07 09:41:48 compute-1 ceph-mon[80077]: 2.2 scrub starts
Dec 07 09:41:48 compute-1 ceph-mon[80077]: 2.2 scrub ok
Dec 07 09:41:48 compute-1 ceph-mon[80077]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 07 09:41:48 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 28 pg[3.14( empty local-lis/les=27/28 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27) [1] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:41:48 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 28 pg[3.13( empty local-lis/les=27/28 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27) [1] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:41:48 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 28 pg[3.11( empty local-lis/les=27/28 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27) [1] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:41:48 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 28 pg[3.f( empty local-lis/les=27/28 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27) [1] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:41:48 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 28 pg[3.c( empty local-lis/les=27/28 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27) [1] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:41:48 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 28 pg[3.d( empty local-lis/les=27/28 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27) [1] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:41:48 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 28 pg[3.3( empty local-lis/les=27/28 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27) [1] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:41:48 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 28 pg[3.5( empty local-lis/les=27/28 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27) [1] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:41:48 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 28 pg[3.9( empty local-lis/les=27/28 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27) [1] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:41:48 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 28 pg[3.a( empty local-lis/les=27/28 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27) [1] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:41:48 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 28 pg[3.1d( empty local-lis/les=27/28 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27) [1] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:41:48 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 28 pg[3.10( empty local-lis/les=27/28 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27) [1] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:41:48 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 28 pg[3.1c( empty local-lis/les=27/28 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27) [1] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:41:48 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 28 pg[3.15( empty local-lis/les=27/28 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27) [1] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:41:48 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 28 pg[3.1a( empty local-lis/les=27/28 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27) [1] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:41:48 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 28 pg[3.e( empty local-lis/les=27/28 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27) [1] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:41:48 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 28 pg[3.16( empty local-lis/les=27/28 n=0 ec=23/17 lis/c=23/23 les/c/f=25/25/0 sis=27) [1] r=0 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:41:48 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Dec 07 09:41:48 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Dec 07 09:41:51 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e28 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 07 09:41:51 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Dec 07 09:41:51 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 2.f scrub starts
Dec 07 09:41:51 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Dec 07 09:41:51 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 2.f scrub ok
Dec 07 09:41:51 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Dec 07 09:41:52 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e29 e29: 3 total, 2 up, 3 in
Dec 07 09:41:52 compute-1 ceph-mon[80077]: 3.4 scrub starts
Dec 07 09:41:52 compute-1 ceph-mon[80077]: 3.4 scrub ok
Dec 07 09:41:52 compute-1 ceph-mon[80077]: 2.0 scrub starts
Dec 07 09:41:52 compute-1 ceph-mon[80077]: 2.0 scrub ok
Dec 07 09:41:52 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1301740075' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Dec 07 09:41:52 compute-1 ceph-mon[80077]: 3.2 scrub starts
Dec 07 09:41:52 compute-1 ceph-mon[80077]: osdmap e28: 3 total, 2 up, 3 in
Dec 07 09:41:52 compute-1 ceph-mon[80077]: pgmap v89: 69 pgs: 32 peering, 37 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 07 09:41:52 compute-1 ceph-mon[80077]: 2.3 scrub starts
Dec 07 09:41:52 compute-1 ceph-mon[80077]: 3.2 scrub ok
Dec 07 09:41:52 compute-1 ceph-mon[80077]: 3.1 scrub starts
Dec 07 09:41:52 compute-1 ceph-mon[80077]: 2.3 scrub ok
Dec 07 09:41:52 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 07 09:41:52 compute-1 ceph-mon[80077]: 2.11 scrub starts
Dec 07 09:41:52 compute-1 ceph-mon[80077]: 3.1 scrub ok
Dec 07 09:41:52 compute-1 ceph-mon[80077]: 3.6 scrub starts
Dec 07 09:41:52 compute-1 ceph-mon[80077]: pgmap v90: 69 pgs: 32 peering, 37 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 07 09:41:52 compute-1 ceph-mon[80077]: 3.6 scrub ok
Dec 07 09:41:52 compute-1 ceph-mon[80077]: 3.7 scrub starts
Dec 07 09:41:52 compute-1 ceph-mon[80077]: 2.12 scrub starts
Dec 07 09:41:52 compute-1 ceph-mon[80077]: 2.11 scrub ok
Dec 07 09:41:52 compute-1 ceph-mon[80077]: 3.7 scrub ok
Dec 07 09:41:52 compute-1 ceph-mon[80077]: 3.0 scrub starts
Dec 07 09:41:52 compute-1 ceph-mon[80077]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 07 09:41:52 compute-1 ceph-mon[80077]: 2.12 scrub ok
Dec 07 09:41:52 compute-1 ceph-mon[80077]: pgmap v91: 69 pgs: 2 active+clean+scrubbing, 17 activating, 15 peering, 35 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 07 09:41:52 compute-1 ceph-mon[80077]: 2.14 scrub starts
Dec 07 09:41:52 compute-1 ceph-mon[80077]: 3.0 scrub ok
Dec 07 09:41:52 compute-1 ceph-mon[80077]: 3.b scrub starts
Dec 07 09:41:52 compute-1 ceph-mon[80077]: 2.16 scrub starts
Dec 07 09:41:52 compute-1 ceph-mon[80077]: 2.14 scrub ok
Dec 07 09:41:52 compute-1 ceph-mon[80077]: from='osd.2 [v2:192.168.122.102:6800/97309485,v1:192.168.122.102:6801/97309485]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Dec 07 09:41:52 compute-1 ceph-mon[80077]: pgmap v92: 69 pgs: 3 active+clean+scrubbing, 17 activating, 49 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 07 09:41:52 compute-1 ceph-mon[80077]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Dec 07 09:41:52 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Dec 07 09:41:52 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Dec 07 09:41:53 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Dec 07 09:41:56 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Dec 07 09:41:56 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Dec 07 09:41:56 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 07 09:41:56 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Dec 07 09:41:56 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e30 e30: 3 total, 2 up, 3 in
Dec 07 09:41:56 compute-1 ceph-mon[80077]: purged_snaps scrub starts
Dec 07 09:41:56 compute-1 ceph-mon[80077]: purged_snaps scrub ok
Dec 07 09:41:56 compute-1 ceph-mon[80077]: 3.b scrub ok
Dec 07 09:41:56 compute-1 ceph-mon[80077]: 3.12 scrub starts
Dec 07 09:41:56 compute-1 ceph-mon[80077]: 3.17 scrub starts
Dec 07 09:41:56 compute-1 ceph-mon[80077]: 3.12 scrub ok
Dec 07 09:41:56 compute-1 ceph-mon[80077]: pgmap v93: 69 pgs: 3 active+clean+scrubbing, 17 activating, 49 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 07 09:41:56 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:56 compute-1 ceph-mon[80077]: 3.17 scrub ok
Dec 07 09:41:56 compute-1 ceph-mon[80077]: 2.17 scrub starts
Dec 07 09:41:56 compute-1 ceph-mon[80077]: 2.f scrub starts
Dec 07 09:41:56 compute-1 ceph-mon[80077]: 2.16 scrub ok
Dec 07 09:41:56 compute-1 ceph-mon[80077]: 2.f scrub ok
Dec 07 09:41:56 compute-1 ceph-mon[80077]: 2.17 scrub ok
Dec 07 09:41:56 compute-1 ceph-mon[80077]: 3.18 scrub starts
Dec 07 09:41:56 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1040799493' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Dec 07 09:41:56 compute-1 ceph-mon[80077]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Dec 07 09:41:56 compute-1 ceph-mon[80077]: from='osd.2 [v2:192.168.122.102:6800/97309485,v1:192.168.122.102:6801/97309485]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Dec 07 09:41:56 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:56 compute-1 ceph-mon[80077]: osdmap e29: 3 total, 2 up, 3 in
Dec 07 09:41:56 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 07 09:41:56 compute-1 ceph-mon[80077]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Dec 07 09:41:56 compute-1 ceph-mon[80077]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 07 09:41:57 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Dec 07 09:41:57 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Dec 07 09:41:58 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 3.f scrub starts
Dec 07 09:41:58 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 30 pg[2.b( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=30 pruub=12.111677170s) [] r=-1 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active pruub 86.722740173s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:41:58 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 30 pg[3.1d( empty local-lis/les=27/28 n=0 ec=23/17 lis/c=27/27 les/c/f=28/28/0 sis=30 pruub=13.892339706s) [] r=-1 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 active pruub 88.503433228s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:41:58 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 30 pg[2.1d( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=30 pruub=12.111441612s) [] r=-1 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active pruub 86.722557068s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:41:58 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 30 pg[2.b( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=30 pruub=12.111677170s) [] r=-1 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.722740173s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:41:58 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 30 pg[2.1c( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=30 pruub=12.112393379s) [] r=-1 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active pruub 86.723556519s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:41:58 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 30 pg[2.1c( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=30 pruub=12.112393379s) [] r=-1 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.723556519s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:41:58 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 30 pg[3.1d( empty local-lis/les=27/28 n=0 ec=23/17 lis/c=27/27 les/c/f=28/28/0 sis=30 pruub=13.892339706s) [] r=-1 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.503433228s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:41:58 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 30 pg[2.1d( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=30 pruub=12.111441612s) [] r=-1 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.722557068s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:41:58 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 30 pg[3.9( empty local-lis/les=27/28 n=0 ec=23/17 lis/c=27/27 les/c/f=28/28/0 sis=30 pruub=13.892090797s) [] r=-1 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 active pruub 88.503433228s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:41:58 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 30 pg[3.9( empty local-lis/les=27/28 n=0 ec=23/17 lis/c=27/27 les/c/f=28/28/0 sis=30 pruub=13.892090797s) [] r=-1 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.503433228s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:41:58 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 30 pg[2.5( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=30 pruub=12.111295700s) [] r=-1 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active pruub 86.722763062s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:41:58 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 30 pg[2.5( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=30 pruub=12.111295700s) [] r=-1 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.722763062s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:41:58 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 30 pg[3.e( empty local-lis/les=27/28 n=0 ec=23/17 lis/c=27/27 les/c/f=28/28/0 sis=30 pruub=13.891806602s) [] r=-1 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 active pruub 88.503479004s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:41:58 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 30 pg[2.f( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=30 pruub=12.111537933s) [] r=-1 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active pruub 86.723251343s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:41:58 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 30 pg[3.e( empty local-lis/les=27/28 n=0 ec=23/17 lis/c=27/27 les/c/f=28/28/0 sis=30 pruub=13.891806602s) [] r=-1 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.503479004s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:41:58 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 30 pg[3.11( empty local-lis/les=27/28 n=0 ec=23/17 lis/c=27/27 les/c/f=28/28/0 sis=30 pruub=13.891650200s) [] r=-1 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 active pruub 88.503387451s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:41:58 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 30 pg[2.f( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=30 pruub=12.111537933s) [] r=-1 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.723251343s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:41:58 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 30 pg[3.11( empty local-lis/les=27/28 n=0 ec=23/17 lis/c=27/27 les/c/f=28/28/0 sis=30 pruub=13.891650200s) [] r=-1 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.503387451s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:41:58 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 30 pg[2.12( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=30 pruub=12.283382416s) [] r=-1 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active pruub 86.895195007s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:41:58 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 30 pg[3.15( empty local-lis/les=27/28 n=0 ec=23/17 lis/c=27/27 les/c/f=28/28/0 sis=30 pruub=13.891605377s) [] r=-1 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 active pruub 88.503456116s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:41:58 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 30 pg[2.12( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=30 pruub=12.283382416s) [] r=-1 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.895195007s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:41:58 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 30 pg[3.15( empty local-lis/les=27/28 n=0 ec=23/17 lis/c=27/27 les/c/f=28/28/0 sis=30 pruub=13.891605377s) [] r=-1 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.503456116s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:41:58 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 30 pg[2.18( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=30 pruub=13.555977821s) [] r=-1 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 active pruub 88.167991638s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:41:58 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 30 pg[3.1a( empty local-lis/les=27/28 n=0 ec=23/17 lis/c=27/27 les/c/f=28/28/0 sis=30 pruub=13.891430855s) [] r=-1 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 active pruub 88.503471375s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:41:58 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 30 pg[2.18( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=30 pruub=13.555977821s) [] r=-1 lpr=30 pi=[19,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.167991638s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:41:58 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 30 pg[3.1a( empty local-lis/les=27/28 n=0 ec=23/17 lis/c=27/27 les/c/f=28/28/0 sis=30 pruub=13.891430855s) [] r=-1 lpr=30 pi=[27,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.503471375s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:41:58 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 3.f scrub ok
Dec 07 09:41:59 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 3.c deep-scrub starts
Dec 07 09:41:59 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 3.c deep-scrub ok
Dec 07 09:41:59 compute-1 ceph-mon[80077]: 2.1a scrub starts
Dec 07 09:41:59 compute-1 ceph-mon[80077]: 2.1a scrub ok
Dec 07 09:41:59 compute-1 ceph-mon[80077]: pgmap v95: 69 pgs: 3 active+clean+scrubbing, 66 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 07 09:41:59 compute-1 ceph-mon[80077]: 2.18 scrub starts
Dec 07 09:41:59 compute-1 ceph-mon[80077]: pgmap v96: 69 pgs: 3 active+clean+scrubbing, 66 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 07 09:41:59 compute-1 ceph-mon[80077]: 3.14 scrub starts
Dec 07 09:41:59 compute-1 ceph-mon[80077]: 2.18 scrub ok
Dec 07 09:41:59 compute-1 ceph-mon[80077]: 3.18 scrub ok
Dec 07 09:41:59 compute-1 ceph-mon[80077]: 3.19 scrub starts
Dec 07 09:41:59 compute-1 ceph-mon[80077]: 2.9 scrub starts
Dec 07 09:41:59 compute-1 ceph-mon[80077]: 3.14 scrub ok
Dec 07 09:41:59 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:41:59 compute-1 ceph-mon[80077]: 3.19 scrub ok
Dec 07 09:41:59 compute-1 ceph-mon[80077]: 2.9 scrub ok
Dec 07 09:41:59 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1040799493' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Dec 07 09:41:59 compute-1 ceph-mon[80077]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Dec 07 09:41:59 compute-1 ceph-mon[80077]: osdmap e30: 3 total, 2 up, 3 in
Dec 07 09:41:59 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 07 09:41:59 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 07 09:41:59 compute-1 ceph-mon[80077]: pgmap v98: 69 pgs: 3 active+clean+scrubbing, 66 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 07 09:41:59 compute-1 sudo[80417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 07 09:41:59 compute-1 sudo[80417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:41:59 compute-1 sudo[80417]: pam_unix(sudo:session): session closed for user root
Dec 07 09:41:59 compute-1 sudo[80442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 09:41:59 compute-1 sudo[80442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:41:59 compute-1 sudo[80442]: pam_unix(sudo:session): session closed for user root
Dec 07 09:41:59 compute-1 sudo[80467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 07 09:41:59 compute-1 sudo[80467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:00 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 3.d scrub starts
Dec 07 09:42:00 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 3.d scrub ok
Dec 07 09:42:01 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Dec 07 09:42:02 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 07 09:42:02 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Dec 07 09:42:02 compute-1 ceph-mon[80077]: 3.13 scrub starts
Dec 07 09:42:02 compute-1 ceph-mon[80077]: 3.13 scrub ok
Dec 07 09:42:02 compute-1 ceph-mon[80077]: 2.4 scrub starts
Dec 07 09:42:02 compute-1 ceph-mon[80077]: 2.4 scrub ok
Dec 07 09:42:02 compute-1 ceph-mon[80077]: 3.f scrub starts
Dec 07 09:42:02 compute-1 ceph-mon[80077]: 3.f scrub ok
Dec 07 09:42:02 compute-1 ceph-mon[80077]: 2.1 scrub starts
Dec 07 09:42:02 compute-1 ceph-mon[80077]: pgmap v99: 69 pgs: 3 active+clean+scrubbing, 66 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 07 09:42:02 compute-1 ceph-mon[80077]: 3.c deep-scrub starts
Dec 07 09:42:02 compute-1 ceph-mon[80077]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 07 09:42:02 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 07 09:42:02 compute-1 ceph-mon[80077]: 3.c deep-scrub ok
Dec 07 09:42:02 compute-1 ceph-mon[80077]: 2.1 scrub ok
Dec 07 09:42:02 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:02 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:02 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 07 09:42:02 compute-1 ceph-mon[80077]: 2.e deep-scrub starts
Dec 07 09:42:02 compute-1 ceph-mon[80077]: 2.e deep-scrub ok
Dec 07 09:42:02 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/4151627428' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Dec 07 09:42:02 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e31 e31: 3 total, 3 up, 3 in
Dec 07 09:42:02 compute-1 podman[80563]: 2025-12-07 09:42:02.24185978 +0000 UTC m=+1.837157951 container exec 0adb3b962f9be9f77484cea48a61ddb6dbdac27d2f6f2d11917c2759647cf1b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-crash-compute-1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec 07 09:42:02 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 31 pg[2.1c( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=31 pruub=8.519000053s) [2] r=-1 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.723556519s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:42:02 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 31 pg[2.1d( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=31 pruub=8.517975807s) [2] r=-1 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.722557068s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:42:02 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 31 pg[2.1d( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=31 pruub=8.517959595s) [2] r=-1 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.722557068s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:42:02 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 31 pg[2.1c( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=31 pruub=8.518963814s) [2] r=-1 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.723556519s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:42:02 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 31 pg[2.b( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=31 pruub=8.517963409s) [2] r=-1 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.722740173s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:42:02 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 31 pg[3.9( empty local-lis/les=27/28 n=0 ec=23/17 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.298641205s) [2] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.503433228s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:42:02 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 31 pg[3.9( empty local-lis/les=27/28 n=0 ec=23/17 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.298622131s) [2] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.503433228s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:42:02 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 31 pg[2.b( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=31 pruub=8.517932892s) [2] r=-1 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.722740173s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:42:02 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 31 pg[2.5( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=31 pruub=8.517904282s) [2] r=-1 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.722763062s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:42:02 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 31 pg[2.5( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=31 pruub=8.517859459s) [2] r=-1 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.722763062s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:42:02 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 31 pg[2.f( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=31 pruub=8.518310547s) [2] r=-1 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.723251343s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:42:02 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 31 pg[3.e( empty local-lis/les=27/28 n=0 ec=23/17 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.298540115s) [2] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.503479004s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:42:02 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 31 pg[2.f( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=31 pruub=8.518287659s) [2] r=-1 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.723251343s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:42:02 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 31 pg[3.e( empty local-lis/les=27/28 n=0 ec=23/17 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.298510551s) [2] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.503479004s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:42:02 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 31 pg[3.11( empty local-lis/les=27/28 n=0 ec=23/17 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.298400879s) [2] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.503387451s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:42:02 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 31 pg[3.11( empty local-lis/les=27/28 n=0 ec=23/17 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.298384666s) [2] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.503387451s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:42:02 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 31 pg[3.1d( empty local-lis/les=27/28 n=0 ec=23/17 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.298913002s) [2] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.503433228s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:42:02 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 31 pg[3.15( empty local-lis/les=27/28 n=0 ec=23/17 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.298370361s) [2] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.503456116s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:42:02 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 31 pg[3.1d( empty local-lis/les=27/28 n=0 ec=23/17 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.298348427s) [2] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.503433228s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:42:02 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 31 pg[3.15( empty local-lis/les=27/28 n=0 ec=23/17 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.298353195s) [2] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.503456116s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:42:02 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 31 pg[2.18( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=31 pruub=9.962821960s) [2] r=-1 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.167991638s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:42:02 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 31 pg[3.1a( empty local-lis/les=27/28 n=0 ec=23/17 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.298274040s) [2] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.503471375s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:42:02 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 31 pg[2.18( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=31 pruub=9.962807655s) [2] r=-1 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.167991638s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:42:02 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 31 pg[3.1a( empty local-lis/les=27/28 n=0 ec=23/17 lis/c=27/27 les/c/f=28/28/0 sis=31 pruub=10.298254967s) [2] r=-1 lpr=31 pi=[27,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.503471375s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:42:02 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 31 pg[2.12( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=31 pruub=8.689875603s) [2] r=-1 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.895195007s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:42:02 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 31 pg[2.12( empty local-lis/les=19/20 n=0 ec=19/15 lis/c=19/19 les/c/f=20/20/0 sis=31 pruub=8.689862251s) [2] r=-1 lpr=31 pi=[19,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.895195007s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:42:02 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 3.5 deep-scrub starts
Dec 07 09:42:02 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 3.5 deep-scrub ok
Dec 07 09:42:02 compute-1 podman[80563]: 2025-12-07 09:42:02.333199706 +0000 UTC m=+1.928497847 container exec_died 0adb3b962f9be9f77484cea48a61ddb6dbdac27d2f6f2d11917c2759647cf1b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-crash-compute-1, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 07 09:42:02 compute-1 sudo[80467]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:02 compute-1 sudo[80648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 09:42:02 compute-1 sudo[80648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:02 compute-1 sudo[80648]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:02 compute-1 sudo[80673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 07 09:42:02 compute-1 sudo[80673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:03 compute-1 ceph-mon[80077]: 3.d scrub starts
Dec 07 09:42:03 compute-1 ceph-mon[80077]: 3.d scrub ok
Dec 07 09:42:03 compute-1 ceph-mon[80077]: OSD bench result of 6088.782708 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 07 09:42:03 compute-1 ceph-mon[80077]: 2.19 scrub starts
Dec 07 09:42:03 compute-1 ceph-mon[80077]: pgmap v100: 69 pgs: 3 active+clean+scrubbing, 66 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 07 09:42:03 compute-1 ceph-mon[80077]: 3.3 scrub starts
Dec 07 09:42:03 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 07 09:42:03 compute-1 ceph-mon[80077]: 2.19 scrub ok
Dec 07 09:42:03 compute-1 ceph-mon[80077]: 3.3 scrub ok
Dec 07 09:42:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/4151627428' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Dec 07 09:42:03 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:03 compute-1 ceph-mon[80077]: osd.2 [v2:192.168.122.102:6800/97309485,v1:192.168.122.102:6801/97309485] boot
Dec 07 09:42:03 compute-1 ceph-mon[80077]: osdmap e31: 3 total, 3 up, 3 in
Dec 07 09:42:03 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 07 09:42:03 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:03 compute-1 ceph-mon[80077]: 3.5 deep-scrub starts
Dec 07 09:42:03 compute-1 ceph-mon[80077]: 3.5 deep-scrub ok
Dec 07 09:42:03 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:03 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:03 compute-1 ceph-mon[80077]: pgmap v102: 69 pgs: 24 peering, 2 active+clean+scrubbing, 43 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 07 09:42:03 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Dec 07 09:42:03 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Dec 07 09:42:03 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e32 e32: 3 total, 3 up, 3 in
Dec 07 09:42:03 compute-1 sudo[80673]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:04 compute-1 ceph-mon[80077]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec 07 09:42:04 compute-1 ceph-mon[80077]: Cluster is now healthy
Dec 07 09:42:04 compute-1 ceph-mon[80077]: 3.10 scrub starts
Dec 07 09:42:04 compute-1 ceph-mon[80077]: 3.10 scrub ok
Dec 07 09:42:04 compute-1 ceph-mon[80077]: osdmap e32: 3 total, 3 up, 3 in
Dec 07 09:42:04 compute-1 ceph-mon[80077]: 3.1a scrub starts
Dec 07 09:42:04 compute-1 ceph-mon[80077]: 3.1a scrub ok
Dec 07 09:42:04 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 3.a deep-scrub starts
Dec 07 09:42:04 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 3.a deep-scrub ok
Dec 07 09:42:05 compute-1 sudo[80729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 07 09:42:05 compute-1 sudo[80729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:05 compute-1 sudo[80729]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:05 compute-1 ceph-mon[80077]: 3.a deep-scrub starts
Dec 07 09:42:05 compute-1 ceph-mon[80077]: 3.a deep-scrub ok
Dec 07 09:42:05 compute-1 ceph-mon[80077]: 3.e scrub starts
Dec 07 09:42:05 compute-1 ceph-mon[80077]: 3.e scrub ok
Dec 07 09:42:05 compute-1 ceph-mon[80077]: pgmap v104: 69 pgs: 24 peering, 2 active+clean+scrubbing, 43 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 07 09:42:05 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:05 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:05 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Dec 07 09:42:05 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:42:05 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 07 09:42:05 compute-1 sudo[80754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/etc/ceph
Dec 07 09:42:05 compute-1 sudo[80754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:05 compute-1 sudo[80754]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:05 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Dec 07 09:42:05 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Dec 07 09:42:05 compute-1 sudo[80779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/etc/ceph/ceph.conf.new
Dec 07 09:42:05 compute-1 sudo[80779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:05 compute-1 sudo[80779]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:05 compute-1 sudo[80804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c
Dec 07 09:42:05 compute-1 sudo[80804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:05 compute-1 sudo[80804]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:05 compute-1 sudo[80829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/etc/ceph/ceph.conf.new
Dec 07 09:42:05 compute-1 sudo[80829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:05 compute-1 sudo[80829]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:05 compute-1 sudo[80877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/etc/ceph/ceph.conf.new
Dec 07 09:42:05 compute-1 sudo[80877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:05 compute-1 sudo[80877]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:05 compute-1 sudo[80902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/etc/ceph/ceph.conf.new
Dec 07 09:42:05 compute-1 sudo[80902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:05 compute-1 sudo[80902]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:05 compute-1 sudo[80927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 07 09:42:05 compute-1 sudo[80927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:05 compute-1 sudo[80927]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:05 compute-1 sudo[80952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config
Dec 07 09:42:05 compute-1 sudo[80952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:05 compute-1 sudo[80952]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:05 compute-1 sudo[80977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config
Dec 07 09:42:05 compute-1 sudo[80977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:05 compute-1 sudo[80977]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:05 compute-1 sudo[81002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf.new
Dec 07 09:42:05 compute-1 sudo[81002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:05 compute-1 sudo[81002]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:05 compute-1 sudo[81027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c
Dec 07 09:42:05 compute-1 sudo[81027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:05 compute-1 sudo[81027]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:06 compute-1 sudo[81052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf.new
Dec 07 09:42:06 compute-1 sudo[81052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:06 compute-1 sudo[81052]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:06 compute-1 sudo[81100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf.new
Dec 07 09:42:06 compute-1 sudo[81100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:06 compute-1 sudo[81100]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:06 compute-1 sudo[81125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf.new
Dec 07 09:42:06 compute-1 sudo[81125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:06 compute-1 ceph-mon[80077]: Adjusting osd_memory_target on compute-2 to 128.0M
Dec 07 09:42:06 compute-1 ceph-mon[80077]: Unable to set osd_memory_target on compute-2 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Dec 07 09:42:06 compute-1 ceph-mon[80077]: Updating compute-0:/etc/ceph/ceph.conf
Dec 07 09:42:06 compute-1 ceph-mon[80077]: Updating compute-1:/etc/ceph/ceph.conf
Dec 07 09:42:06 compute-1 sudo[81125]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:06 compute-1 ceph-mon[80077]: Updating compute-2:/etc/ceph/ceph.conf
Dec 07 09:42:06 compute-1 ceph-mon[80077]: 3.16 scrub starts
Dec 07 09:42:06 compute-1 ceph-mon[80077]: 3.16 scrub ok
Dec 07 09:42:06 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/2148721283' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec 07 09:42:06 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/2148721283' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec 07 09:42:06 compute-1 ceph-mon[80077]: Updating compute-0:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec 07 09:42:06 compute-1 ceph-mon[80077]: Updating compute-1:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec 07 09:42:06 compute-1 ceph-mon[80077]: Updating compute-2:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec 07 09:42:06 compute-1 ceph-mon[80077]: 3.11 scrub starts
Dec 07 09:42:06 compute-1 ceph-mon[80077]: 3.11 scrub ok
Dec 07 09:42:06 compute-1 sudo[81150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf.new /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec 07 09:42:06 compute-1 sudo[81150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:06 compute-1 sudo[81150]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:07 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 07 09:42:08 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:08 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:08 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/698108247' entity='client.admin' 
Dec 07 09:42:08 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:08 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:08 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:08 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:08 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:08 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 07 09:42:08 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 07 09:42:08 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:42:08 compute-1 ceph-mon[80077]: pgmap v105: 69 pgs: 24 peering, 2 active+clean+scrubbing, 43 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 07 09:42:08 compute-1 ceph-mon[80077]: 3.15 scrub starts
Dec 07 09:42:08 compute-1 ceph-mon[80077]: 3.15 scrub ok
Dec 07 09:42:09 compute-1 ceph-mon[80077]: from='client.14298 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 09:42:09 compute-1 ceph-mon[80077]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec 07 09:42:09 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:09 compute-1 ceph-mon[80077]: Saving service ingress.rgw.default spec with placement count:2
Dec 07 09:42:09 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:11 compute-1 ceph-mon[80077]: 3.9 scrub starts
Dec 07 09:42:11 compute-1 ceph-mon[80077]: 3.9 scrub ok
Dec 07 09:42:11 compute-1 ceph-mon[80077]: pgmap v106: 69 pgs: 69 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 07 09:42:11 compute-1 ceph-mon[80077]: 2.15 deep-scrub starts
Dec 07 09:42:11 compute-1 ceph-mon[80077]: 2.15 deep-scrub ok
Dec 07 09:42:11 compute-1 ceph-mon[80077]: 2.c deep-scrub starts
Dec 07 09:42:11 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:11 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:11 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:11 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:12 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 07 09:42:12 compute-1 ceph-mon[80077]: 2.c deep-scrub ok
Dec 07 09:42:12 compute-1 ceph-mon[80077]: from='client.14304 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 09:42:12 compute-1 ceph-mon[80077]: Saving service node-exporter spec with placement *
Dec 07 09:42:12 compute-1 ceph-mon[80077]: Saving service grafana spec with placement compute-0;count:1
Dec 07 09:42:12 compute-1 ceph-mon[80077]: Saving service prometheus spec with placement compute-0;count:1
Dec 07 09:42:12 compute-1 ceph-mon[80077]: Saving service alertmanager spec with placement compute-0;count:1
Dec 07 09:42:12 compute-1 ceph-mon[80077]: pgmap v107: 69 pgs: 69 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 07 09:42:12 compute-1 ceph-mon[80077]: 2.d scrub starts
Dec 07 09:42:12 compute-1 ceph-mon[80077]: 2.d scrub ok
Dec 07 09:42:12 compute-1 ceph-mon[80077]: 3.1d scrub starts
Dec 07 09:42:12 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1876920365' entity='client.admin' 
Dec 07 09:42:13 compute-1 ceph-mon[80077]: 3.1d scrub ok
Dec 07 09:42:13 compute-1 ceph-mon[80077]: pgmap v108: 69 pgs: 69 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 07 09:42:13 compute-1 ceph-mon[80077]: 2.13 scrub starts
Dec 07 09:42:13 compute-1 ceph-mon[80077]: 2.13 scrub ok
Dec 07 09:42:14 compute-1 ceph-mon[80077]: 2.1b scrub starts
Dec 07 09:42:14 compute-1 ceph-mon[80077]: 2.1b scrub ok
Dec 07 09:42:14 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/2339250577' entity='client.admin' 
Dec 07 09:42:15 compute-1 systemd[72563]: Starting Mark boot as successful...
Dec 07 09:42:15 compute-1 systemd[72563]: Finished Mark boot as successful.
Dec 07 09:42:15 compute-1 sudo[81176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 07 09:42:15 compute-1 sudo[81176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:15 compute-1 sudo[81176]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:16 compute-1 ceph-mon[80077]: pgmap v109: 69 pgs: 69 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 07 09:42:16 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1229875527' entity='client.admin' 
Dec 07 09:42:16 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:16 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:17 compute-1 ceph-mon[80077]: 2.10 scrub starts
Dec 07 09:42:17 compute-1 ceph-mon[80077]: 2.10 scrub ok
Dec 07 09:42:17 compute-1 ceph-mon[80077]: Reconfiguring mon.compute-0 (monmap changed)...
Dec 07 09:42:17 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 07 09:42:17 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 07 09:42:17 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:42:17 compute-1 ceph-mon[80077]: Reconfiguring daemon mon.compute-0 on compute-0
Dec 07 09:42:17 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/405494054' entity='client.admin' 
Dec 07 09:42:17 compute-1 ceph-mon[80077]: pgmap v110: 69 pgs: 69 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 07 09:42:17 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:17 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:17 compute-1 ceph-mon[80077]: Reconfiguring mgr.compute-0.dotugk (monmap changed)...
Dec 07 09:42:17 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.dotugk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 07 09:42:17 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 07 09:42:17 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:42:17 compute-1 ceph-mon[80077]: Reconfiguring daemon mgr.compute-0.dotugk on compute-0
Dec 07 09:42:17 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 07 09:42:17 compute-1 sudo[81224]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmzdwwwiryhjtcfozvqruqnjteklnghp ; /usr/bin/python3'
Dec 07 09:42:17 compute-1 sudo[81224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:42:17 compute-1 python3[81226]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a -f 'name=ceph-?(.*)-mgr.*' --format \{\{\.Command\}\} --no-trunc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:42:17 compute-1 sudo[81224]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:18 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:18 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:18 compute-1 ceph-mon[80077]: Reconfiguring crash.compute-0 (monmap changed)...
Dec 07 09:42:18 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 07 09:42:18 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:42:18 compute-1 ceph-mon[80077]: Reconfiguring daemon crash.compute-0 on compute-0
Dec 07 09:42:18 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:18 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:18 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Dec 07 09:42:18 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:42:18 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/3877033432' entity='client.admin' 
Dec 07 09:42:19 compute-1 sudo[81242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 09:42:19 compute-1 sudo[81242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:19 compute-1 sudo[81242]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:19 compute-1 sudo[81267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c
Dec 07 09:42:19 compute-1 sudo[81267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:19 compute-1 podman[81308]: 2025-12-07 09:42:19.636116231 +0000 UTC m=+0.052613178 container create 66754523d2ed55305468a927c2f520fe617bd91a3e5c9073a641b23ba59f9484 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_lovelace, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 07 09:42:19 compute-1 systemd[1]: Started libpod-conmon-66754523d2ed55305468a927c2f520fe617bd91a3e5c9073a641b23ba59f9484.scope.
Dec 07 09:42:19 compute-1 systemd[1]: Started libcrun container.
Dec 07 09:42:19 compute-1 podman[81308]: 2025-12-07 09:42:19.610349647 +0000 UTC m=+0.026846684 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 07 09:42:19 compute-1 podman[81308]: 2025-12-07 09:42:19.716887418 +0000 UTC m=+0.133384385 container init 66754523d2ed55305468a927c2f520fe617bd91a3e5c9073a641b23ba59f9484 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 07 09:42:19 compute-1 podman[81308]: 2025-12-07 09:42:19.724360163 +0000 UTC m=+0.140857130 container start 66754523d2ed55305468a927c2f520fe617bd91a3e5c9073a641b23ba59f9484 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_lovelace, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec 07 09:42:19 compute-1 podman[81308]: 2025-12-07 09:42:19.729436422 +0000 UTC m=+0.145933399 container attach 66754523d2ed55305468a927c2f520fe617bd91a3e5c9073a641b23ba59f9484 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_lovelace, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 07 09:42:19 compute-1 flamboyant_lovelace[81324]: 167 167
Dec 07 09:42:19 compute-1 systemd[1]: libpod-66754523d2ed55305468a927c2f520fe617bd91a3e5c9073a641b23ba59f9484.scope: Deactivated successfully.
Dec 07 09:42:19 compute-1 podman[81308]: 2025-12-07 09:42:19.73049855 +0000 UTC m=+0.146995497 container died 66754523d2ed55305468a927c2f520fe617bd91a3e5c9073a641b23ba59f9484 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_lovelace, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec 07 09:42:20 compute-1 systemd[1]: var-lib-containers-storage-overlay-f76997a496b5e5376e595751f1309b68bac331119a42be3996b1972b1a1e23d6-merged.mount: Deactivated successfully.
Dec 07 09:42:20 compute-1 ceph-mon[80077]: Reconfiguring osd.0 (monmap changed)...
Dec 07 09:42:20 compute-1 ceph-mon[80077]: Reconfiguring daemon osd.0 on compute-0
Dec 07 09:42:20 compute-1 ceph-mon[80077]: pgmap v111: 69 pgs: 69 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 07 09:42:20 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:20 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:20 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 07 09:42:20 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:42:20 compute-1 podman[81308]: 2025-12-07 09:42:20.224161569 +0000 UTC m=+0.640658556 container remove 66754523d2ed55305468a927c2f520fe617bd91a3e5c9073a641b23ba59f9484 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_lovelace, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 07 09:42:20 compute-1 sudo[81267]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:20 compute-1 systemd[1]: libpod-conmon-66754523d2ed55305468a927c2f520fe617bd91a3e5c9073a641b23ba59f9484.scope: Deactivated successfully.
Dec 07 09:42:20 compute-1 sudo[81340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 09:42:20 compute-1 sudo[81340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:20 compute-1 sudo[81340]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:20 compute-1 sudo[81365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c
Dec 07 09:42:20 compute-1 sudo[81365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:20 compute-1 podman[81405]: 2025-12-07 09:42:20.783725329 +0000 UTC m=+0.041675449 container create cf1cb7f9b7facc3116b4a2b5a51ff4b857f9259f6734d3188acfaf31f0c9e863 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_jemison, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec 07 09:42:20 compute-1 systemd[1]: Started libpod-conmon-cf1cb7f9b7facc3116b4a2b5a51ff4b857f9259f6734d3188acfaf31f0c9e863.scope.
Dec 07 09:42:20 compute-1 systemd[1]: Started libcrun container.
Dec 07 09:42:20 compute-1 podman[81405]: 2025-12-07 09:42:20.838154636 +0000 UTC m=+0.096104786 container init cf1cb7f9b7facc3116b4a2b5a51ff4b857f9259f6734d3188acfaf31f0c9e863 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_jemison, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 07 09:42:20 compute-1 podman[81405]: 2025-12-07 09:42:20.846683459 +0000 UTC m=+0.104633619 container start cf1cb7f9b7facc3116b4a2b5a51ff4b857f9259f6734d3188acfaf31f0c9e863 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_jemison, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec 07 09:42:20 compute-1 wonderful_jemison[81421]: 167 167
Dec 07 09:42:20 compute-1 systemd[1]: libpod-cf1cb7f9b7facc3116b4a2b5a51ff4b857f9259f6734d3188acfaf31f0c9e863.scope: Deactivated successfully.
Dec 07 09:42:20 compute-1 podman[81405]: 2025-12-07 09:42:20.852415896 +0000 UTC m=+0.110366036 container attach cf1cb7f9b7facc3116b4a2b5a51ff4b857f9259f6734d3188acfaf31f0c9e863 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_jemison, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec 07 09:42:20 compute-1 podman[81405]: 2025-12-07 09:42:20.854045961 +0000 UTC m=+0.111996121 container died cf1cb7f9b7facc3116b4a2b5a51ff4b857f9259f6734d3188acfaf31f0c9e863 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_jemison, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 07 09:42:20 compute-1 podman[81405]: 2025-12-07 09:42:20.768047561 +0000 UTC m=+0.025997701 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 07 09:42:20 compute-1 systemd[1]: var-lib-containers-storage-overlay-24f27c0f7ba5779bf05767c1eeddc0788955a8f71e64ce83c90c23b411ccdf17-merged.mount: Deactivated successfully.
Dec 07 09:42:20 compute-1 podman[81405]: 2025-12-07 09:42:20.893893659 +0000 UTC m=+0.151843769 container remove cf1cb7f9b7facc3116b4a2b5a51ff4b857f9259f6734d3188acfaf31f0c9e863 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_jemison, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 07 09:42:20 compute-1 systemd[1]: libpod-conmon-cf1cb7f9b7facc3116b4a2b5a51ff4b857f9259f6734d3188acfaf31f0c9e863.scope: Deactivated successfully.
Dec 07 09:42:21 compute-1 sudo[81365]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:21 compute-1 sudo[81448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 09:42:21 compute-1 sudo[81448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:21 compute-1 sudo[81448]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:21 compute-1 ceph-mon[80077]: Reconfiguring crash.compute-1 (monmap changed)...
Dec 07 09:42:21 compute-1 ceph-mon[80077]: Reconfiguring daemon crash.compute-1 on compute-1
Dec 07 09:42:21 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/995242702' entity='client.admin' 
Dec 07 09:42:21 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:21 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:21 compute-1 ceph-mon[80077]: Reconfiguring osd.1 (monmap changed)...
Dec 07 09:42:21 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Dec 07 09:42:21 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:42:21 compute-1 ceph-mon[80077]: Reconfiguring daemon osd.1 on compute-1
Dec 07 09:42:21 compute-1 ceph-mon[80077]: pgmap v112: 69 pgs: 69 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 07 09:42:21 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:21 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:21 compute-1 ceph-mon[80077]: Reconfiguring mon.compute-1 (monmap changed)...
Dec 07 09:42:21 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 07 09:42:21 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 07 09:42:21 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:42:21 compute-1 ceph-mon[80077]: Reconfiguring daemon mon.compute-1 on compute-1
Dec 07 09:42:21 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/3060147273' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Dec 07 09:42:21 compute-1 sudo[81473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c
Dec 07 09:42:21 compute-1 sudo[81473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:21 compute-1 podman[81514]: 2025-12-07 09:42:21.50215149 +0000 UTC m=+0.034008460 container create ccec88ad3390da23f0e498f5052d2ac5cb288ef023210dc1ef78bee0f8637e45 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_kepler, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 07 09:42:21 compute-1 systemd[1]: Started libpod-conmon-ccec88ad3390da23f0e498f5052d2ac5cb288ef023210dc1ef78bee0f8637e45.scope.
Dec 07 09:42:21 compute-1 systemd[1]: Started libcrun container.
Dec 07 09:42:21 compute-1 podman[81514]: 2025-12-07 09:42:21.577681564 +0000 UTC m=+0.109538584 container init ccec88ad3390da23f0e498f5052d2ac5cb288ef023210dc1ef78bee0f8637e45 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 07 09:42:21 compute-1 podman[81514]: 2025-12-07 09:42:21.488519538 +0000 UTC m=+0.020376528 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 07 09:42:21 compute-1 podman[81514]: 2025-12-07 09:42:21.588050837 +0000 UTC m=+0.119907837 container start ccec88ad3390da23f0e498f5052d2ac5cb288ef023210dc1ef78bee0f8637e45 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_kepler, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 07 09:42:21 compute-1 suspicious_kepler[81530]: 167 167
Dec 07 09:42:21 compute-1 systemd[1]: libpod-ccec88ad3390da23f0e498f5052d2ac5cb288ef023210dc1ef78bee0f8637e45.scope: Deactivated successfully.
Dec 07 09:42:21 compute-1 conmon[81530]: conmon ccec88ad3390da23f0e4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ccec88ad3390da23f0e498f5052d2ac5cb288ef023210dc1ef78bee0f8637e45.scope/container/memory.events
Dec 07 09:42:21 compute-1 podman[81514]: 2025-12-07 09:42:21.59184386 +0000 UTC m=+0.123700880 container attach ccec88ad3390da23f0e498f5052d2ac5cb288ef023210dc1ef78bee0f8637e45 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_kepler, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 07 09:42:21 compute-1 podman[81514]: 2025-12-07 09:42:21.594138923 +0000 UTC m=+0.125995943 container died ccec88ad3390da23f0e498f5052d2ac5cb288ef023210dc1ef78bee0f8637e45 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec 07 09:42:21 compute-1 systemd[1]: var-lib-containers-storage-overlay-60c73c969b9e44b9faeed80562cfd23b192102e0ccc9edbc19125bb1db1e445b-merged.mount: Deactivated successfully.
Dec 07 09:42:21 compute-1 podman[81514]: 2025-12-07 09:42:21.628199374 +0000 UTC m=+0.160056344 container remove ccec88ad3390da23f0e498f5052d2ac5cb288ef023210dc1ef78bee0f8637e45 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325)
Dec 07 09:42:21 compute-1 systemd[1]: libpod-conmon-ccec88ad3390da23f0e498f5052d2ac5cb288ef023210dc1ef78bee0f8637e45.scope: Deactivated successfully.
Dec 07 09:42:21 compute-1 sudo[81473]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:22 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 07 09:42:22 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:22 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:22 compute-1 ceph-mon[80077]: Reconfiguring mon.compute-2 (monmap changed)...
Dec 07 09:42:22 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 07 09:42:22 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 07 09:42:22 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:42:22 compute-1 ceph-mon[80077]: Reconfiguring daemon mon.compute-2 on compute-2
Dec 07 09:42:22 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/3060147273' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Dec 07 09:42:22 compute-1 ceph-mon[80077]: mgrmap e12: compute-0.dotugk(active, since 2m), standbys: compute-2.ntknug, compute-1.buauyv
Dec 07 09:42:22 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:22 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:22 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.ntknug", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 07 09:42:22 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 07 09:42:22 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:42:23 compute-1 sudo[81546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 09:42:23 compute-1 sudo[81546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:23 compute-1 sudo[81546]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:23 compute-1 sudo[81571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 07 09:42:23 compute-1 sudo[81571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:23 compute-1 ceph-mon[80077]: Reconfiguring mgr.compute-2.ntknug (monmap changed)...
Dec 07 09:42:23 compute-1 ceph-mon[80077]: Reconfiguring daemon mgr.compute-2.ntknug on compute-2
Dec 07 09:42:23 compute-1 ceph-mon[80077]: pgmap v113: 69 pgs: 69 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 07 09:42:23 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:23 compute-1 ceph-mon[80077]: from='mgr.14122 192.168.122.100:0/2409854747' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:23 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/2991289158' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Dec 07 09:42:23 compute-1 podman[81667]: 2025-12-07 09:42:23.970856287 +0000 UTC m=+0.052938228 container exec 0adb3b962f9be9f77484cea48a61ddb6dbdac27d2f6f2d11917c2759647cf1b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-crash-compute-1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec 07 09:42:24 compute-1 podman[81667]: 2025-12-07 09:42:24.070978582 +0000 UTC m=+0.153060493 container exec_died 0adb3b962f9be9f77484cea48a61ddb6dbdac27d2f6f2d11917c2759647cf1b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-crash-compute-1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 07 09:42:24 compute-1 sudo[81571]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:24 compute-1 ceph-mgr[80383]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec 07 09:42:24 compute-1 ceph-mgr[80383]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec 07 09:42:24 compute-1 ceph-mgr[80383]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec 07 09:42:24 compute-1 ceph-mgr[80383]: mgr respawn  1: '-n'
Dec 07 09:42:24 compute-1 ceph-mgr[80383]: mgr respawn  2: 'mgr.compute-1.buauyv'
Dec 07 09:42:24 compute-1 ceph-mgr[80383]: mgr respawn  3: '-f'
Dec 07 09:42:24 compute-1 ceph-mgr[80383]: mgr respawn  4: '--setuser'
Dec 07 09:42:24 compute-1 ceph-mgr[80383]: mgr respawn  5: 'ceph'
Dec 07 09:42:24 compute-1 ceph-mgr[80383]: mgr respawn  6: '--setgroup'
Dec 07 09:42:24 compute-1 ceph-mgr[80383]: mgr respawn  7: 'ceph'
Dec 07 09:42:24 compute-1 ceph-mgr[80383]: mgr respawn  8: '--default-log-to-file=false'
Dec 07 09:42:24 compute-1 ceph-mgr[80383]: mgr respawn  9: '--default-log-to-journald=true'
Dec 07 09:42:24 compute-1 ceph-mgr[80383]: mgr respawn  10: '--default-log-to-stderr=false'
Dec 07 09:42:24 compute-1 ceph-mgr[80383]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Dec 07 09:42:24 compute-1 ceph-mgr[80383]: mgr respawn  exe_path /proc/self/exe
Dec 07 09:42:24 compute-1 sshd-session[72870]: Read error from remote host 192.168.122.100 port 34910: Connection reset by peer
Dec 07 09:42:24 compute-1 sshd-session[72785]: Connection closed by 192.168.122.100 port 37174
Dec 07 09:42:24 compute-1 sshd-session[72814]: Connection closed by 192.168.122.100 port 37176
Dec 07 09:42:24 compute-1 sshd-session[72669]: Connection closed by 192.168.122.100 port 37144
Dec 07 09:42:24 compute-1 sshd-session[72756]: Connection closed by 192.168.122.100 port 37158
Dec 07 09:42:24 compute-1 sshd-session[72841]: Connection closed by 192.168.122.100 port 34902
Dec 07 09:42:24 compute-1 sshd-session[72727]: Connection closed by 192.168.122.100 port 37156
Dec 07 09:42:24 compute-1 sshd-session[72698]: Connection closed by 192.168.122.100 port 37148
Dec 07 09:42:24 compute-1 sshd-session[72582]: Connection closed by 192.168.122.100 port 37126
Dec 07 09:42:24 compute-1 sshd-session[72640]: Connection closed by 192.168.122.100 port 37142
Dec 07 09:42:24 compute-1 sshd-session[72611]: Connection closed by 192.168.122.100 port 37130
Dec 07 09:42:24 compute-1 sshd-session[72581]: Connection closed by 192.168.122.100 port 37118
Dec 07 09:42:24 compute-1 sshd-session[72838]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 07 09:42:24 compute-1 sshd-session[72608]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 07 09:42:24 compute-1 sshd-session[72559]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 07 09:42:24 compute-1 sshd-session[72782]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 07 09:42:24 compute-1 sshd-session[72811]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 07 09:42:24 compute-1 sshd-session[72695]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 07 09:42:24 compute-1 sshd-session[72666]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 07 09:42:24 compute-1 sshd-session[72637]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 07 09:42:24 compute-1 sshd-session[72576]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 07 09:42:24 compute-1 sshd-session[72867]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 07 09:42:24 compute-1 sshd-session[72724]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 07 09:42:24 compute-1 sshd-session[72753]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 07 09:42:24 compute-1 systemd[1]: session-26.scope: Deactivated successfully.
Dec 07 09:42:24 compute-1 systemd[1]: session-22.scope: Deactivated successfully.
Dec 07 09:42:24 compute-1 systemd[1]: session-27.scope: Deactivated successfully.
Dec 07 09:42:24 compute-1 systemd[1]: session-20.scope: Deactivated successfully.
Dec 07 09:42:24 compute-1 systemd[1]: session-25.scope: Deactivated successfully.
Dec 07 09:42:24 compute-1 systemd[1]: session-28.scope: Deactivated successfully.
Dec 07 09:42:24 compute-1 systemd[1]: session-32.scope: Deactivated successfully.
Dec 07 09:42:24 compute-1 systemd[1]: session-32.scope: Consumed 1min 3.643s CPU time.
Dec 07 09:42:24 compute-1 systemd[1]: session-31.scope: Deactivated successfully.
Dec 07 09:42:24 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: ignoring --setuser ceph since I am not root
Dec 07 09:42:24 compute-1 systemd[1]: session-24.scope: Deactivated successfully.
Dec 07 09:42:24 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: ignoring --setgroup ceph since I am not root
Dec 07 09:42:24 compute-1 systemd[1]: session-29.scope: Deactivated successfully.
Dec 07 09:42:24 compute-1 systemd[1]: session-30.scope: Deactivated successfully.
Dec 07 09:42:24 compute-1 systemd[1]: session-23.scope: Deactivated successfully.
Dec 07 09:42:24 compute-1 systemd-logind[796]: Session 23 logged out. Waiting for processes to exit.
Dec 07 09:42:24 compute-1 systemd-logind[796]: Session 26 logged out. Waiting for processes to exit.
Dec 07 09:42:24 compute-1 systemd-logind[796]: Session 30 logged out. Waiting for processes to exit.
Dec 07 09:42:24 compute-1 ceph-mgr[80383]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec 07 09:42:24 compute-1 systemd-logind[796]: Session 20 logged out. Waiting for processes to exit.
Dec 07 09:42:24 compute-1 ceph-mgr[80383]: pidfile_write: ignore empty --pid-file
Dec 07 09:42:24 compute-1 systemd-logind[796]: Session 25 logged out. Waiting for processes to exit.
Dec 07 09:42:24 compute-1 systemd-logind[796]: Session 31 logged out. Waiting for processes to exit.
Dec 07 09:42:24 compute-1 systemd-logind[796]: Session 32 logged out. Waiting for processes to exit.
Dec 07 09:42:24 compute-1 systemd-logind[796]: Session 24 logged out. Waiting for processes to exit.
Dec 07 09:42:24 compute-1 systemd-logind[796]: Session 28 logged out. Waiting for processes to exit.
Dec 07 09:42:24 compute-1 systemd-logind[796]: Session 27 logged out. Waiting for processes to exit.
Dec 07 09:42:24 compute-1 systemd-logind[796]: Session 22 logged out. Waiting for processes to exit.
Dec 07 09:42:24 compute-1 systemd-logind[796]: Session 29 logged out. Waiting for processes to exit.
Dec 07 09:42:24 compute-1 systemd-logind[796]: Removed session 26.
Dec 07 09:42:24 compute-1 systemd-logind[796]: Removed session 22.
Dec 07 09:42:24 compute-1 systemd-logind[796]: Removed session 27.
Dec 07 09:42:24 compute-1 systemd-logind[796]: Removed session 20.
Dec 07 09:42:24 compute-1 systemd-logind[796]: Removed session 25.
Dec 07 09:42:24 compute-1 systemd-logind[796]: Removed session 28.
Dec 07 09:42:24 compute-1 systemd-logind[796]: Removed session 32.
Dec 07 09:42:24 compute-1 systemd-logind[796]: Removed session 31.
Dec 07 09:42:24 compute-1 systemd-logind[796]: Removed session 24.
Dec 07 09:42:24 compute-1 systemd-logind[796]: Removed session 29.
Dec 07 09:42:24 compute-1 systemd-logind[796]: Removed session 30.
Dec 07 09:42:24 compute-1 systemd-logind[796]: Removed session 23.
Dec 07 09:42:24 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'alerts'
Dec 07 09:42:24 compute-1 ceph-mgr[80383]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 07 09:42:24 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'balancer'
Dec 07 09:42:24 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:24.595+0000 7efc40968140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 07 09:42:24 compute-1 ceph-mgr[80383]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 07 09:42:24 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:24.682+0000 7efc40968140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 07 09:42:24 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'cephadm'
Dec 07 09:42:25 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/2991289158' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Dec 07 09:42:25 compute-1 ceph-mon[80077]: mgrmap e13: compute-0.dotugk(active, since 2m), standbys: compute-2.ntknug, compute-1.buauyv
Dec 07 09:42:25 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'crash'
Dec 07 09:42:25 compute-1 ceph-mgr[80383]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 07 09:42:25 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'dashboard'
Dec 07 09:42:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:25.479+0000 7efc40968140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 07 09:42:26 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'devicehealth'
Dec 07 09:42:26 compute-1 ceph-mgr[80383]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 07 09:42:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:26.124+0000 7efc40968140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 07 09:42:26 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'diskprediction_local'
Dec 07 09:42:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 07 09:42:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 07 09:42:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]:   from numpy import show_config as show_numpy_config
Dec 07 09:42:26 compute-1 ceph-mgr[80383]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 07 09:42:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:26.285+0000 7efc40968140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 07 09:42:26 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'influx'
Dec 07 09:42:26 compute-1 ceph-mgr[80383]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 07 09:42:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:26.356+0000 7efc40968140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 07 09:42:26 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'insights'
Dec 07 09:42:26 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'iostat'
Dec 07 09:42:26 compute-1 ceph-mgr[80383]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 07 09:42:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:26.483+0000 7efc40968140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 07 09:42:26 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'k8sevents'
Dec 07 09:42:26 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'localpool'
Dec 07 09:42:26 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'mds_autoscaler'
Dec 07 09:42:27 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'mirroring'
Dec 07 09:42:27 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 07 09:42:27 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'nfs'
Dec 07 09:42:27 compute-1 ceph-mgr[80383]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 07 09:42:27 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'orchestrator'
Dec 07 09:42:27 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:27.452+0000 7efc40968140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 07 09:42:27 compute-1 ceph-mgr[80383]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 07 09:42:27 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:27.650+0000 7efc40968140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 07 09:42:27 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'osd_perf_query'
Dec 07 09:42:27 compute-1 ceph-mgr[80383]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 07 09:42:27 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:27.719+0000 7efc40968140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 07 09:42:27 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'osd_support'
Dec 07 09:42:27 compute-1 ceph-mgr[80383]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 07 09:42:27 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:27.783+0000 7efc40968140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 07 09:42:27 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'pg_autoscaler'
Dec 07 09:42:27 compute-1 ceph-mgr[80383]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 07 09:42:27 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'progress'
Dec 07 09:42:27 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:27.856+0000 7efc40968140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 07 09:42:27 compute-1 ceph-mgr[80383]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 07 09:42:27 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'prometheus'
Dec 07 09:42:27 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:27.922+0000 7efc40968140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 07 09:42:28 compute-1 ceph-mgr[80383]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 07 09:42:28 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:28.245+0000 7efc40968140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 07 09:42:28 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'rbd_support'
Dec 07 09:42:28 compute-1 ceph-mgr[80383]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 07 09:42:28 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:28.334+0000 7efc40968140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 07 09:42:28 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'restful'
Dec 07 09:42:28 compute-1 sshd-session[81240]: Connection closed by 3.137.73.221 port 54644 [preauth]
Dec 07 09:42:28 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'rgw'
Dec 07 09:42:28 compute-1 ceph-mgr[80383]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 07 09:42:28 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'rook'
Dec 07 09:42:28 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:28.757+0000 7efc40968140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 07 09:42:29 compute-1 ceph-mgr[80383]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 07 09:42:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:29.304+0000 7efc40968140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 07 09:42:29 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'selftest'
Dec 07 09:42:29 compute-1 ceph-mgr[80383]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 07 09:42:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:29.374+0000 7efc40968140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 07 09:42:29 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'snap_schedule'
Dec 07 09:42:29 compute-1 ceph-mgr[80383]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 07 09:42:29 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'stats'
Dec 07 09:42:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:29.458+0000 7efc40968140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 07 09:42:29 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'status'
Dec 07 09:42:29 compute-1 ceph-mgr[80383]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec 07 09:42:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:29.601+0000 7efc40968140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec 07 09:42:29 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'telegraf'
Dec 07 09:42:29 compute-1 ceph-mgr[80383]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 07 09:42:29 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'telemetry'
Dec 07 09:42:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:29.675+0000 7efc40968140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 07 09:42:29 compute-1 ceph-mgr[80383]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 07 09:42:29 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'test_orchestrator'
Dec 07 09:42:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:29.835+0000 7efc40968140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 07 09:42:30 compute-1 ceph-mgr[80383]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 07 09:42:30 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'volumes'
Dec 07 09:42:30 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:30.057+0000 7efc40968140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 07 09:42:30 compute-1 ceph-mgr[80383]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 07 09:42:30 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'zabbix'
Dec 07 09:42:30 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:30.319+0000 7efc40968140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 07 09:42:30 compute-1 ceph-mgr[80383]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 07 09:42:30 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:30.390+0000 7efc40968140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 07 09:42:30 compute-1 ceph-mgr[80383]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 07 09:42:30 compute-1 ceph-mgr[80383]: mgr load Constructed class from module: dashboard
Dec 07 09:42:30 compute-1 ceph-mgr[80383]: [dashboard INFO root] server: ssl=no host=192.168.122.101 port=8443
Dec 07 09:42:30 compute-1 ceph-mgr[80383]: [dashboard INFO root] Configured CherryPy, starting engine...
Dec 07 09:42:30 compute-1 ceph-mgr[80383]: [dashboard INFO root] Starting engine...
Dec 07 09:42:30 compute-1 ceph-mgr[80383]: ms_deliver_dispatch: unhandled message 0x55cc49c8b860 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Dec 07 09:42:30 compute-1 ceph-mon[80077]: Standby manager daemon compute-1.buauyv restarted
Dec 07 09:42:30 compute-1 ceph-mon[80077]: Standby manager daemon compute-1.buauyv started
Dec 07 09:42:30 compute-1 ceph-mon[80077]: Standby manager daemon compute-2.ntknug restarted
Dec 07 09:42:30 compute-1 ceph-mon[80077]: Standby manager daemon compute-2.ntknug started
Dec 07 09:42:30 compute-1 ceph-mgr[80383]: [dashboard INFO root] Engine started...
Dec 07 09:42:30 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e33 e33: 3 total, 3 up, 3 in
Dec 07 09:42:31 compute-1 sshd-session[81796]: Accepted publickey for ceph-admin from 192.168.122.100 port 38802 ssh2: RSA SHA256:6l98s8c6wPNzPo5MkgJlyQXSDM1JtoqRSeqTJW3rr9A
Dec 07 09:42:31 compute-1 systemd-logind[796]: New session 33 of user ceph-admin.
Dec 07 09:42:31 compute-1 systemd[1]: Started Session 33 of User ceph-admin.
Dec 07 09:42:31 compute-1 sshd-session[81796]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 07 09:42:31 compute-1 sudo[81800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 09:42:31 compute-1 sudo[81800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:31 compute-1 sudo[81800]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:31 compute-1 sudo[81825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 07 09:42:31 compute-1 sudo[81825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:31 compute-1 ceph-mon[80077]: mgrmap e14: compute-0.dotugk(active, since 3m), standbys: compute-2.ntknug, compute-1.buauyv
Dec 07 09:42:31 compute-1 ceph-mon[80077]: Active manager daemon compute-0.dotugk restarted
Dec 07 09:42:31 compute-1 ceph-mon[80077]: Activating manager daemon compute-0.dotugk
Dec 07 09:42:31 compute-1 ceph-mon[80077]: osdmap e33: 3 total, 3 up, 3 in
Dec 07 09:42:31 compute-1 ceph-mon[80077]: mgrmap e15: compute-0.dotugk(active, starting, since 0.03578s), standbys: compute-2.ntknug, compute-1.buauyv
Dec 07 09:42:31 compute-1 ceph-mon[80077]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 07 09:42:31 compute-1 ceph-mon[80077]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 07 09:42:31 compute-1 ceph-mon[80077]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 07 09:42:31 compute-1 ceph-mon[80077]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mgr metadata", "who": "compute-0.dotugk", "id": "compute-0.dotugk"}]: dispatch
Dec 07 09:42:31 compute-1 ceph-mon[80077]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mgr metadata", "who": "compute-2.ntknug", "id": "compute-2.ntknug"}]: dispatch
Dec 07 09:42:31 compute-1 ceph-mon[80077]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mgr metadata", "who": "compute-1.buauyv", "id": "compute-1.buauyv"}]: dispatch
Dec 07 09:42:31 compute-1 ceph-mon[80077]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 07 09:42:31 compute-1 ceph-mon[80077]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 07 09:42:31 compute-1 ceph-mon[80077]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 07 09:42:31 compute-1 ceph-mon[80077]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 07 09:42:31 compute-1 ceph-mon[80077]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 07 09:42:31 compute-1 ceph-mon[80077]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 07 09:42:31 compute-1 ceph-mon[80077]: Manager daemon compute-0.dotugk is now available
Dec 07 09:42:31 compute-1 ceph-mon[80077]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dotugk/mirror_snapshot_schedule"}]: dispatch
Dec 07 09:42:31 compute-1 ceph-mon[80077]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dotugk/trash_purge_schedule"}]: dispatch
Dec 07 09:42:31 compute-1 podman[81923]: 2025-12-07 09:42:31.812947829 +0000 UTC m=+0.056642808 container exec 0adb3b962f9be9f77484cea48a61ddb6dbdac27d2f6f2d11917c2759647cf1b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-crash-compute-1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec 07 09:42:31 compute-1 podman[81923]: 2025-12-07 09:42:31.937956996 +0000 UTC m=+0.181651975 container exec_died 0adb3b962f9be9f77484cea48a61ddb6dbdac27d2f6f2d11917c2759647cf1b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-crash-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 07 09:42:32 compute-1 sudo[81825]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:32 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 07 09:42:32 compute-1 sudo[82009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 09:42:32 compute-1 sudo[82009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:32 compute-1 sudo[82009]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:32 compute-1 sudo[82034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 07 09:42:32 compute-1 sudo[82034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:32 compute-1 ceph-mon[80077]: mgrmap e16: compute-0.dotugk(active, since 1.06172s), standbys: compute-2.ntknug, compute-1.buauyv
Dec 07 09:42:32 compute-1 ceph-mon[80077]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:32 compute-1 ceph-mon[80077]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:32 compute-1 ceph-mon[80077]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:32 compute-1 ceph-mon[80077]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:32 compute-1 ceph-mon[80077]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:32 compute-1 ceph-mon[80077]: [07/Dec/2025:09:42:32] ENGINE Bus STARTING
Dec 07 09:42:32 compute-1 ceph-mon[80077]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:32 compute-1 sudo[82034]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:32 compute-1 sudo[82091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 09:42:32 compute-1 sudo[82091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:32 compute-1 sudo[82091]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:33 compute-1 sudo[82116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Dec 07 09:42:33 compute-1 sudo[82116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:33 compute-1 sudo[82116]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:33 compute-1 ceph-mon[80077]: [07/Dec/2025:09:42:32] ENGINE Serving on https://192.168.122.100:7150
Dec 07 09:42:33 compute-1 ceph-mon[80077]: from='client.24164 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-password", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 09:42:33 compute-1 ceph-mon[80077]: [07/Dec/2025:09:42:32] ENGINE Client ('192.168.122.100', 41934) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 07 09:42:33 compute-1 ceph-mon[80077]: pgmap v4: 69 pgs: 69 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 07 09:42:33 compute-1 ceph-mon[80077]: [07/Dec/2025:09:42:32] ENGINE Serving on http://192.168.122.100:8765
Dec 07 09:42:33 compute-1 ceph-mon[80077]: [07/Dec/2025:09:42:32] ENGINE Bus STARTED
Dec 07 09:42:33 compute-1 ceph-mon[80077]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:33 compute-1 ceph-mon[80077]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:33 compute-1 ceph-mon[80077]: mgrmap e17: compute-0.dotugk(active, since 2s), standbys: compute-2.ntknug, compute-1.buauyv
Dec 07 09:42:33 compute-1 ceph-mon[80077]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:33 compute-1 ceph-mon[80077]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:33 compute-1 ceph-mon[80077]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:33 compute-1 ceph-mon[80077]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec 07 09:42:33 compute-1 ceph-mon[80077]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:33 compute-1 ceph-mon[80077]: Adjusting osd_memory_target on compute-1 to 127.9M
Dec 07 09:42:33 compute-1 ceph-mon[80077]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec 07 09:42:33 compute-1 ceph-mon[80077]: Adjusting osd_memory_target on compute-0 to 128.0M
Dec 07 09:42:33 compute-1 ceph-mon[80077]: Unable to set osd_memory_target on compute-1 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Dec 07 09:42:33 compute-1 ceph-mon[80077]: Unable to set osd_memory_target on compute-0 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Dec 07 09:42:33 compute-1 ceph-mon[80077]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:34 compute-1 sudo[82159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 07 09:42:34 compute-1 sudo[82159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:34 compute-1 sudo[82159]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:34 compute-1 sudo[82184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/etc/ceph
Dec 07 09:42:34 compute-1 sudo[82184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:34 compute-1 sudo[82184]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:34 compute-1 sudo[82209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/etc/ceph/ceph.conf.new
Dec 07 09:42:34 compute-1 sudo[82209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:34 compute-1 sudo[82209]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:34 compute-1 sudo[82234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c
Dec 07 09:42:34 compute-1 sudo[82234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:34 compute-1 sudo[82234]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:34 compute-1 sudo[82259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/etc/ceph/ceph.conf.new
Dec 07 09:42:34 compute-1 sudo[82259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:34 compute-1 sudo[82259]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:34 compute-1 sudo[82307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/etc/ceph/ceph.conf.new
Dec 07 09:42:34 compute-1 sudo[82307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:34 compute-1 sudo[82307]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:34 compute-1 sudo[82332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/etc/ceph/ceph.conf.new
Dec 07 09:42:34 compute-1 sudo[82332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:34 compute-1 sudo[82332]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:34 compute-1 sudo[82357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 07 09:42:34 compute-1 sudo[82357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:34 compute-1 sudo[82357]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:34 compute-1 sudo[82382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config
Dec 07 09:42:34 compute-1 sudo[82382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:34 compute-1 sudo[82382]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:34 compute-1 sudo[82407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config
Dec 07 09:42:34 compute-1 sudo[82407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:34 compute-1 sudo[82407]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:34 compute-1 sudo[82432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf.new
Dec 07 09:42:34 compute-1 sudo[82432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:34 compute-1 sudo[82432]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:34 compute-1 sudo[82457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c
Dec 07 09:42:34 compute-1 sudo[82457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:34 compute-1 sudo[82457]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:35 compute-1 sudo[82482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf.new
Dec 07 09:42:35 compute-1 sudo[82482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:35 compute-1 sudo[82482]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:35 compute-1 ceph-mon[80077]: from='client.14409 -' entity='client.admin' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.122.100:9093", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 09:42:35 compute-1 ceph-mon[80077]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:35 compute-1 ceph-mon[80077]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:35 compute-1 ceph-mon[80077]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Dec 07 09:42:35 compute-1 ceph-mon[80077]: Adjusting osd_memory_target on compute-2 to 128.0M
Dec 07 09:42:35 compute-1 ceph-mon[80077]: Unable to set osd_memory_target on compute-2 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Dec 07 09:42:35 compute-1 ceph-mon[80077]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:42:35 compute-1 ceph-mon[80077]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 07 09:42:35 compute-1 ceph-mon[80077]: Updating compute-0:/etc/ceph/ceph.conf
Dec 07 09:42:35 compute-1 ceph-mon[80077]: Updating compute-1:/etc/ceph/ceph.conf
Dec 07 09:42:35 compute-1 ceph-mon[80077]: Updating compute-2:/etc/ceph/ceph.conf
Dec 07 09:42:35 compute-1 ceph-mon[80077]: from='client.14415 -' entity='client.admin' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.122.100:9092", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 09:42:35 compute-1 ceph-mon[80077]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:35 compute-1 sudo[82530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf.new
Dec 07 09:42:35 compute-1 sudo[82530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:35 compute-1 sudo[82530]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:35 compute-1 sudo[82555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf.new
Dec 07 09:42:35 compute-1 sudo[82555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:35 compute-1 sudo[82555]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:35 compute-1 sudo[82580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf.new /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec 07 09:42:35 compute-1 sudo[82580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:35 compute-1 sudo[82580]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:35 compute-1 sudo[82605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 07 09:42:35 compute-1 sudo[82605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:35 compute-1 sudo[82605]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:35 compute-1 sudo[82630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/etc/ceph
Dec 07 09:42:35 compute-1 sudo[82630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:35 compute-1 sudo[82630]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:35 compute-1 sudo[82655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/etc/ceph/ceph.client.admin.keyring.new
Dec 07 09:42:35 compute-1 sudo[82655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:35 compute-1 sudo[82655]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:35 compute-1 sudo[82680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c
Dec 07 09:42:35 compute-1 sudo[82680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:35 compute-1 sudo[82680]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:35 compute-1 sudo[82705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/etc/ceph/ceph.client.admin.keyring.new
Dec 07 09:42:35 compute-1 sudo[82705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:35 compute-1 sudo[82705]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:35 compute-1 sudo[82753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/etc/ceph/ceph.client.admin.keyring.new
Dec 07 09:42:35 compute-1 sudo[82753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:35 compute-1 sudo[82753]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:35 compute-1 sudo[82778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/etc/ceph/ceph.client.admin.keyring.new
Dec 07 09:42:35 compute-1 sudo[82778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:35 compute-1 sudo[82778]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:35 compute-1 sudo[82803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 07 09:42:35 compute-1 sudo[82803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:35 compute-1 sudo[82803]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:35 compute-1 sudo[82828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config
Dec 07 09:42:35 compute-1 sudo[82828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:35 compute-1 sudo[82828]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:35 compute-1 sudo[82853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config
Dec 07 09:42:35 compute-1 sudo[82853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:35 compute-1 sudo[82853]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:36 compute-1 sudo[82878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring.new
Dec 07 09:42:36 compute-1 sudo[82878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:36 compute-1 sudo[82878]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:36 compute-1 sudo[82903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c
Dec 07 09:42:36 compute-1 sudo[82903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:36 compute-1 sudo[82903]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:36 compute-1 sudo[82928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring.new
Dec 07 09:42:36 compute-1 sudo[82928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:36 compute-1 sudo[82928]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:36 compute-1 sudo[82976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring.new
Dec 07 09:42:36 compute-1 sudo[82976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:36 compute-1 sudo[82976]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:36 compute-1 sudo[83001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring.new
Dec 07 09:42:36 compute-1 sudo[83001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:36 compute-1 sudo[83001]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:36 compute-1 ceph-mon[80077]: pgmap v5: 69 pgs: 69 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 07 09:42:36 compute-1 ceph-mon[80077]: Updating compute-0:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec 07 09:42:36 compute-1 ceph-mon[80077]: Updating compute-1:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec 07 09:42:36 compute-1 ceph-mon[80077]: Updating compute-2:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec 07 09:42:36 compute-1 ceph-mon[80077]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 07 09:42:36 compute-1 ceph-mon[80077]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 07 09:42:36 compute-1 ceph-mon[80077]: from='client.14421 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "http://192.168.122.100:3100", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 09:42:36 compute-1 ceph-mon[80077]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:36 compute-1 ceph-mon[80077]: mgrmap e18: compute-0.dotugk(active, since 4s), standbys: compute-2.ntknug, compute-1.buauyv
Dec 07 09:42:36 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/2509102800' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Dec 07 09:42:36 compute-1 sudo[83026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring.new /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring
Dec 07 09:42:36 compute-1 sudo[83026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:36 compute-1 sudo[83026]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:36 compute-1 ceph-mgr[80383]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec 07 09:42:36 compute-1 ceph-mgr[80383]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec 07 09:42:36 compute-1 ceph-mgr[80383]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec 07 09:42:36 compute-1 ceph-mgr[80383]: mgr respawn  1: '-n'
Dec 07 09:42:36 compute-1 ceph-mgr[80383]: mgr respawn  2: 'mgr.compute-1.buauyv'
Dec 07 09:42:36 compute-1 ceph-mgr[80383]: mgr respawn  3: '-f'
Dec 07 09:42:36 compute-1 ceph-mgr[80383]: mgr respawn  4: '--setuser'
Dec 07 09:42:36 compute-1 ceph-mgr[80383]: mgr respawn  5: 'ceph'
Dec 07 09:42:36 compute-1 ceph-mgr[80383]: mgr respawn  6: '--setgroup'
Dec 07 09:42:36 compute-1 ceph-mgr[80383]: mgr respawn  7: 'ceph'
Dec 07 09:42:36 compute-1 ceph-mgr[80383]: mgr respawn  8: '--default-log-to-file=false'
Dec 07 09:42:36 compute-1 ceph-mgr[80383]: mgr respawn  9: '--default-log-to-journald=true'
Dec 07 09:42:36 compute-1 ceph-mgr[80383]: mgr respawn  10: '--default-log-to-stderr=false'
Dec 07 09:42:36 compute-1 ceph-mgr[80383]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Dec 07 09:42:36 compute-1 ceph-mgr[80383]: mgr respawn  exe_path /proc/self/exe
Dec 07 09:42:36 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: ignoring --setuser ceph since I am not root
Dec 07 09:42:36 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: ignoring --setgroup ceph since I am not root
Dec 07 09:42:36 compute-1 sshd-session[81799]: Connection closed by 192.168.122.100 port 38802
Dec 07 09:42:36 compute-1 ceph-mgr[80383]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec 07 09:42:36 compute-1 ceph-mgr[80383]: pidfile_write: ignore empty --pid-file
Dec 07 09:42:36 compute-1 sshd-session[81796]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 07 09:42:36 compute-1 systemd[1]: session-33.scope: Deactivated successfully.
Dec 07 09:42:36 compute-1 systemd[1]: session-33.scope: Consumed 4.568s CPU time.
Dec 07 09:42:36 compute-1 systemd-logind[796]: Session 33 logged out. Waiting for processes to exit.
Dec 07 09:42:36 compute-1 systemd-logind[796]: Removed session 33.
Dec 07 09:42:36 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'alerts'
Dec 07 09:42:36 compute-1 ceph-mgr[80383]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 07 09:42:36 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'balancer'
Dec 07 09:42:36 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:36.750+0000 7fe6136ae140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 07 09:42:36 compute-1 ceph-mgr[80383]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 07 09:42:36 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'cephadm'
Dec 07 09:42:36 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:36.828+0000 7fe6136ae140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 07 09:42:37 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 07 09:42:37 compute-1 ceph-mon[80077]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 07 09:42:37 compute-1 ceph-mon[80077]: Updating compute-0:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring
Dec 07 09:42:37 compute-1 ceph-mon[80077]: Updating compute-1:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring
Dec 07 09:42:37 compute-1 ceph-mon[80077]: Updating compute-2:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring
Dec 07 09:42:37 compute-1 ceph-mon[80077]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:37 compute-1 ceph-mon[80077]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:37 compute-1 ceph-mon[80077]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:37 compute-1 ceph-mon[80077]: from='mgr.14364 192.168.122.100:0/166470804' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:37 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/2509102800' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Dec 07 09:42:37 compute-1 ceph-mon[80077]: mgrmap e19: compute-0.dotugk(active, since 6s), standbys: compute-2.ntknug, compute-1.buauyv
Dec 07 09:42:37 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'crash'
Dec 07 09:42:37 compute-1 ceph-mgr[80383]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 07 09:42:37 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:37.611+0000 7fe6136ae140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 07 09:42:37 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'dashboard'
Dec 07 09:42:38 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'devicehealth'
Dec 07 09:42:38 compute-1 ceph-mgr[80383]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 07 09:42:38 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'diskprediction_local'
Dec 07 09:42:38 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:38.224+0000 7fe6136ae140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 07 09:42:38 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/3673904121' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Dec 07 09:42:38 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 07 09:42:38 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 07 09:42:38 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]:   from numpy import show_config as show_numpy_config
Dec 07 09:42:38 compute-1 ceph-mgr[80383]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 07 09:42:38 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'influx'
Dec 07 09:42:38 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:38.394+0000 7fe6136ae140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 07 09:42:38 compute-1 ceph-mgr[80383]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 07 09:42:38 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:38.481+0000 7fe6136ae140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 07 09:42:38 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'insights'
Dec 07 09:42:38 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'iostat'
Dec 07 09:42:38 compute-1 ceph-mgr[80383]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 07 09:42:38 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'k8sevents'
Dec 07 09:42:38 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:38.622+0000 7fe6136ae140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 07 09:42:38 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'localpool'
Dec 07 09:42:39 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'mds_autoscaler'
Dec 07 09:42:39 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'mirroring'
Dec 07 09:42:39 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'nfs'
Dec 07 09:42:39 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/3673904121' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Dec 07 09:42:39 compute-1 ceph-mon[80077]: mgrmap e20: compute-0.dotugk(active, since 7s), standbys: compute-2.ntknug, compute-1.buauyv
Dec 07 09:42:39 compute-1 ceph-mgr[80383]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 07 09:42:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:39.640+0000 7fe6136ae140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 07 09:42:39 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'orchestrator'
Dec 07 09:42:39 compute-1 ceph-mgr[80383]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 07 09:42:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:39.862+0000 7fe6136ae140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 07 09:42:39 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'osd_perf_query'
Dec 07 09:42:39 compute-1 ceph-mgr[80383]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 07 09:42:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:39.936+0000 7fe6136ae140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 07 09:42:39 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'osd_support'
Dec 07 09:42:40 compute-1 ceph-mgr[80383]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 07 09:42:40 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'pg_autoscaler'
Dec 07 09:42:40 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:40.004+0000 7fe6136ae140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 07 09:42:40 compute-1 ceph-mgr[80383]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 07 09:42:40 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'progress'
Dec 07 09:42:40 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:40.082+0000 7fe6136ae140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 07 09:42:40 compute-1 ceph-mgr[80383]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 07 09:42:40 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:40.155+0000 7fe6136ae140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 07 09:42:40 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'prometheus'
Dec 07 09:42:40 compute-1 ceph-mgr[80383]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 07 09:42:40 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'rbd_support'
Dec 07 09:42:40 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:40.495+0000 7fe6136ae140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 07 09:42:40 compute-1 ceph-mgr[80383]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 07 09:42:40 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'restful'
Dec 07 09:42:40 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:40.586+0000 7fe6136ae140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 07 09:42:40 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'rgw'
Dec 07 09:42:41 compute-1 ceph-mgr[80383]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 07 09:42:41 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'rook'
Dec 07 09:42:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:41.019+0000 7fe6136ae140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 07 09:42:41 compute-1 ceph-mgr[80383]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 07 09:42:41 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'selftest'
Dec 07 09:42:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:41.547+0000 7fe6136ae140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 07 09:42:41 compute-1 ceph-mgr[80383]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 07 09:42:41 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'snap_schedule'
Dec 07 09:42:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:41.617+0000 7fe6136ae140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 07 09:42:41 compute-1 ceph-mgr[80383]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 07 09:42:41 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'stats'
Dec 07 09:42:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:41.695+0000 7fe6136ae140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 07 09:42:41 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'status'
Dec 07 09:42:41 compute-1 ceph-mgr[80383]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec 07 09:42:41 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'telegraf'
Dec 07 09:42:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:41.847+0000 7fe6136ae140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec 07 09:42:41 compute-1 ceph-mgr[80383]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 07 09:42:41 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'telemetry'
Dec 07 09:42:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:41.916+0000 7fe6136ae140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 07 09:42:42 compute-1 ceph-mgr[80383]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 07 09:42:42 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'test_orchestrator'
Dec 07 09:42:42 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:42.071+0000 7fe6136ae140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 07 09:42:42 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 07 09:42:42 compute-1 ceph-mgr[80383]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 07 09:42:42 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'volumes'
Dec 07 09:42:42 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:42.288+0000 7fe6136ae140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 07 09:42:42 compute-1 ceph-mgr[80383]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 07 09:42:42 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'zabbix'
Dec 07 09:42:42 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:42.563+0000 7fe6136ae140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 07 09:42:42 compute-1 ceph-mgr[80383]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 07 09:42:42 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:42.642+0000 7fe6136ae140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 07 09:42:42 compute-1 ceph-mgr[80383]: ms_deliver_dispatch: unhandled message 0x55d12849f860 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Dec 07 09:42:42 compute-1 ceph-mgr[80383]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec 07 09:42:42 compute-1 ceph-mgr[80383]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec 07 09:42:42 compute-1 ceph-mgr[80383]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec 07 09:42:42 compute-1 ceph-mgr[80383]: mgr respawn  1: '-n'
Dec 07 09:42:42 compute-1 ceph-mgr[80383]: mgr respawn  2: 'mgr.compute-1.buauyv'
Dec 07 09:42:42 compute-1 ceph-mgr[80383]: mgr respawn  3: '-f'
Dec 07 09:42:42 compute-1 ceph-mgr[80383]: mgr respawn  4: '--setuser'
Dec 07 09:42:42 compute-1 ceph-mgr[80383]: mgr respawn  5: 'ceph'
Dec 07 09:42:42 compute-1 ceph-mgr[80383]: mgr respawn  6: '--setgroup'
Dec 07 09:42:42 compute-1 ceph-mgr[80383]: mgr respawn  7: 'ceph'
Dec 07 09:42:42 compute-1 ceph-mgr[80383]: mgr respawn  8: '--default-log-to-file=false'
Dec 07 09:42:42 compute-1 ceph-mgr[80383]: mgr respawn  9: '--default-log-to-journald=true'
Dec 07 09:42:42 compute-1 ceph-mgr[80383]: mgr respawn  10: '--default-log-to-stderr=false'
Dec 07 09:42:42 compute-1 ceph-mgr[80383]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Dec 07 09:42:42 compute-1 ceph-mgr[80383]: mgr respawn  exe_path /proc/self/exe
Dec 07 09:42:42 compute-1 ceph-mon[80077]: Standby manager daemon compute-2.ntknug restarted
Dec 07 09:42:42 compute-1 ceph-mon[80077]: Standby manager daemon compute-2.ntknug started
Dec 07 09:42:42 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e34 e34: 3 total, 3 up, 3 in
Dec 07 09:42:42 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: ignoring --setuser ceph since I am not root
Dec 07 09:42:42 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: ignoring --setgroup ceph since I am not root
Dec 07 09:42:42 compute-1 ceph-mgr[80383]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec 07 09:42:42 compute-1 ceph-mgr[80383]: pidfile_write: ignore empty --pid-file
Dec 07 09:42:42 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'alerts'
Dec 07 09:42:42 compute-1 ceph-mgr[80383]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 07 09:42:42 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'balancer'
Dec 07 09:42:42 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:42.862+0000 7ff760571140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 07 09:42:42 compute-1 ceph-mgr[80383]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 07 09:42:42 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'cephadm'
Dec 07 09:42:42 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:42.936+0000 7ff760571140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 07 09:42:43 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'crash'
Dec 07 09:42:43 compute-1 ceph-mon[80077]: mgrmap e21: compute-0.dotugk(active, since 12s), standbys: compute-1.buauyv, compute-2.ntknug
Dec 07 09:42:43 compute-1 ceph-mon[80077]: Standby manager daemon compute-1.buauyv restarted
Dec 07 09:42:43 compute-1 ceph-mon[80077]: Standby manager daemon compute-1.buauyv started
Dec 07 09:42:43 compute-1 ceph-mon[80077]: Active manager daemon compute-0.dotugk restarted
Dec 07 09:42:43 compute-1 ceph-mon[80077]: Activating manager daemon compute-0.dotugk
Dec 07 09:42:43 compute-1 ceph-mon[80077]: osdmap e34: 3 total, 3 up, 3 in
Dec 07 09:42:43 compute-1 ceph-mon[80077]: mgrmap e22: compute-0.dotugk(active, starting, since 0.0286199s), standbys: compute-2.ntknug, compute-1.buauyv
Dec 07 09:42:43 compute-1 ceph-mgr[80383]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 07 09:42:43 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'dashboard'
Dec 07 09:42:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:43.709+0000 7ff760571140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 07 09:42:44 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'devicehealth'
Dec 07 09:42:44 compute-1 ceph-mgr[80383]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 07 09:42:44 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'diskprediction_local'
Dec 07 09:42:44 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:44.330+0000 7ff760571140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 07 09:42:44 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 07 09:42:44 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 07 09:42:44 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]:   from numpy import show_config as show_numpy_config
Dec 07 09:42:44 compute-1 ceph-mgr[80383]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 07 09:42:44 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'influx'
Dec 07 09:42:44 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:44.496+0000 7ff760571140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 07 09:42:44 compute-1 ceph-mgr[80383]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 07 09:42:44 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:44.565+0000 7ff760571140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 07 09:42:44 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'insights'
Dec 07 09:42:44 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'iostat'
Dec 07 09:42:44 compute-1 ceph-mgr[80383]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 07 09:42:44 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'k8sevents'
Dec 07 09:42:44 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:44.705+0000 7ff760571140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 07 09:42:45 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'localpool'
Dec 07 09:42:45 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'mds_autoscaler'
Dec 07 09:42:45 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'mirroring'
Dec 07 09:42:45 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'nfs'
Dec 07 09:42:45 compute-1 ceph-mgr[80383]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 07 09:42:45 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'orchestrator'
Dec 07 09:42:45 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:45.733+0000 7ff760571140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 07 09:42:45 compute-1 ceph-mgr[80383]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 07 09:42:45 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'osd_perf_query'
Dec 07 09:42:45 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:45.957+0000 7ff760571140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 07 09:42:46 compute-1 ceph-mgr[80383]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 07 09:42:46 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'osd_support'
Dec 07 09:42:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:46.033+0000 7ff760571140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 07 09:42:46 compute-1 ceph-mgr[80383]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 07 09:42:46 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'pg_autoscaler'
Dec 07 09:42:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:46.099+0000 7ff760571140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 07 09:42:46 compute-1 ceph-mgr[80383]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 07 09:42:46 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'progress'
Dec 07 09:42:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:46.204+0000 7ff760571140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 07 09:42:46 compute-1 ceph-mgr[80383]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 07 09:42:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:46.278+0000 7ff760571140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 07 09:42:46 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'prometheus'
Dec 07 09:42:46 compute-1 ceph-mgr[80383]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 07 09:42:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:46.604+0000 7ff760571140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 07 09:42:46 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'rbd_support'
Dec 07 09:42:46 compute-1 ceph-mgr[80383]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 07 09:42:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:46.696+0000 7ff760571140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 07 09:42:46 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'restful'
Dec 07 09:42:46 compute-1 systemd[1]: Stopping User Manager for UID 42477...
Dec 07 09:42:46 compute-1 systemd[72563]: Activating special unit Exit the Session...
Dec 07 09:42:46 compute-1 systemd[72563]: Stopped target Main User Target.
Dec 07 09:42:46 compute-1 systemd[72563]: Stopped target Basic System.
Dec 07 09:42:46 compute-1 systemd[72563]: Stopped target Paths.
Dec 07 09:42:46 compute-1 systemd[72563]: Stopped target Sockets.
Dec 07 09:42:46 compute-1 systemd[72563]: Stopped target Timers.
Dec 07 09:42:46 compute-1 systemd[72563]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec 07 09:42:46 compute-1 systemd[72563]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 07 09:42:46 compute-1 systemd[72563]: Closed D-Bus User Message Bus Socket.
Dec 07 09:42:46 compute-1 systemd[72563]: Stopped Create User's Volatile Files and Directories.
Dec 07 09:42:46 compute-1 systemd[72563]: Removed slice User Application Slice.
Dec 07 09:42:46 compute-1 systemd[72563]: Reached target Shutdown.
Dec 07 09:42:46 compute-1 systemd[72563]: Finished Exit the Session.
Dec 07 09:42:46 compute-1 systemd[72563]: Reached target Exit the Session.
Dec 07 09:42:46 compute-1 systemd[1]: user@42477.service: Deactivated successfully.
Dec 07 09:42:46 compute-1 systemd[1]: Stopped User Manager for UID 42477.
Dec 07 09:42:46 compute-1 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Dec 07 09:42:46 compute-1 systemd[1]: run-user-42477.mount: Deactivated successfully.
Dec 07 09:42:46 compute-1 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Dec 07 09:42:46 compute-1 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Dec 07 09:42:46 compute-1 systemd[1]: Removed slice User Slice of UID 42477.
Dec 07 09:42:46 compute-1 systemd[1]: user-42477.slice: Consumed 1min 9.545s CPU time.
Dec 07 09:42:46 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'rgw'
Dec 07 09:42:47 compute-1 ceph-mgr[80383]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 07 09:42:47 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'rook'
Dec 07 09:42:47 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:47.121+0000 7ff760571140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 07 09:42:47 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 07 09:42:47 compute-1 ceph-mgr[80383]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 07 09:42:47 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:47.661+0000 7ff760571140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 07 09:42:47 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'selftest'
Dec 07 09:42:47 compute-1 ceph-mgr[80383]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 07 09:42:47 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:47.733+0000 7ff760571140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 07 09:42:47 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'snap_schedule'
Dec 07 09:42:47 compute-1 ceph-mgr[80383]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 07 09:42:47 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'stats'
Dec 07 09:42:47 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:47.824+0000 7ff760571140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 07 09:42:47 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'status'
Dec 07 09:42:47 compute-1 ceph-mgr[80383]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec 07 09:42:47 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:47.971+0000 7ff760571140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec 07 09:42:47 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'telegraf'
Dec 07 09:42:48 compute-1 ceph-mgr[80383]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 07 09:42:48 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:48.037+0000 7ff760571140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 07 09:42:48 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'telemetry'
Dec 07 09:42:48 compute-1 ceph-mgr[80383]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 07 09:42:48 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'test_orchestrator'
Dec 07 09:42:48 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:48.184+0000 7ff760571140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 07 09:42:48 compute-1 ceph-mgr[80383]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 07 09:42:48 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:48.401+0000 7ff760571140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 07 09:42:48 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'volumes'
Dec 07 09:42:48 compute-1 ceph-mgr[80383]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 07 09:42:48 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:48.648+0000 7ff760571140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 07 09:42:48 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'zabbix'
Dec 07 09:42:48 compute-1 ceph-mgr[80383]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 07 09:42:48 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:42:48.723+0000 7ff760571140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 07 09:42:48 compute-1 ceph-mgr[80383]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 07 09:42:48 compute-1 ceph-mgr[80383]: mgr load Constructed class from module: dashboard
Dec 07 09:42:48 compute-1 ceph-mgr[80383]: ms_deliver_dispatch: unhandled message 0x55f9cc027860 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Dec 07 09:42:48 compute-1 ceph-mgr[80383]: [dashboard INFO root] server: ssl=no host=192.168.122.101 port=8443
Dec 07 09:42:48 compute-1 ceph-mgr[80383]: [dashboard INFO root] Configured CherryPy, starting engine...
Dec 07 09:42:48 compute-1 ceph-mgr[80383]: [dashboard INFO root] Starting engine...
Dec 07 09:42:48 compute-1 ceph-mgr[80383]: [dashboard INFO root] Engine started...
Dec 07 09:42:49 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e35 e35: 3 total, 3 up, 3 in
Dec 07 09:42:49 compute-1 ceph-mon[80077]: Standby manager daemon compute-2.ntknug restarted
Dec 07 09:42:49 compute-1 ceph-mon[80077]: Standby manager daemon compute-2.ntknug started
Dec 07 09:42:49 compute-1 sshd-session[83127]: Accepted publickey for ceph-admin from 192.168.122.100 port 42436 ssh2: RSA SHA256:6l98s8c6wPNzPo5MkgJlyQXSDM1JtoqRSeqTJW3rr9A
Dec 07 09:42:49 compute-1 systemd[1]: Created slice User Slice of UID 42477.
Dec 07 09:42:49 compute-1 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec 07 09:42:49 compute-1 systemd-logind[796]: New session 34 of user ceph-admin.
Dec 07 09:42:49 compute-1 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec 07 09:42:49 compute-1 systemd[1]: Starting User Manager for UID 42477...
Dec 07 09:42:49 compute-1 systemd[83131]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 07 09:42:50 compute-1 systemd[83131]: Queued start job for default target Main User Target.
Dec 07 09:42:50 compute-1 systemd[83131]: Created slice User Application Slice.
Dec 07 09:42:50 compute-1 systemd[83131]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 07 09:42:50 compute-1 systemd[83131]: Started Daily Cleanup of User's Temporary Directories.
Dec 07 09:42:50 compute-1 systemd[83131]: Reached target Paths.
Dec 07 09:42:50 compute-1 systemd[83131]: Reached target Timers.
Dec 07 09:42:50 compute-1 systemd[83131]: Starting D-Bus User Message Bus Socket...
Dec 07 09:42:50 compute-1 systemd[83131]: Starting Create User's Volatile Files and Directories...
Dec 07 09:42:50 compute-1 systemd[83131]: Listening on D-Bus User Message Bus Socket.
Dec 07 09:42:50 compute-1 systemd[83131]: Reached target Sockets.
Dec 07 09:42:50 compute-1 systemd[83131]: Finished Create User's Volatile Files and Directories.
Dec 07 09:42:50 compute-1 systemd[83131]: Reached target Basic System.
Dec 07 09:42:50 compute-1 systemd[83131]: Reached target Main User Target.
Dec 07 09:42:50 compute-1 systemd[83131]: Startup finished in 122ms.
Dec 07 09:42:50 compute-1 systemd[1]: Started User Manager for UID 42477.
Dec 07 09:42:50 compute-1 systemd[1]: Started Session 34 of User ceph-admin.
Dec 07 09:42:50 compute-1 sshd-session[83127]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 07 09:42:50 compute-1 sudo[83147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 09:42:50 compute-1 sudo[83147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:50 compute-1 sudo[83147]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:50 compute-1 sudo[83172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 07 09:42:50 compute-1 sudo[83172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:50 compute-1 ceph-mon[80077]: Standby manager daemon compute-1.buauyv restarted
Dec 07 09:42:50 compute-1 ceph-mon[80077]: Standby manager daemon compute-1.buauyv started
Dec 07 09:42:50 compute-1 ceph-mon[80077]: Active manager daemon compute-0.dotugk restarted
Dec 07 09:42:50 compute-1 ceph-mon[80077]: Activating manager daemon compute-0.dotugk
Dec 07 09:42:50 compute-1 ceph-mon[80077]: mgrmap e23: compute-0.dotugk(active, starting, since 6s), standbys: compute-2.ntknug, compute-1.buauyv
Dec 07 09:42:50 compute-1 ceph-mon[80077]: osdmap e35: 3 total, 3 up, 3 in
Dec 07 09:42:50 compute-1 ceph-mon[80077]: mgrmap e24: compute-0.dotugk(active, starting, since 0.0970144s), standbys: compute-2.ntknug, compute-1.buauyv
Dec 07 09:42:50 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 07 09:42:50 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 07 09:42:50 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 07 09:42:50 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mgr metadata", "who": "compute-0.dotugk", "id": "compute-0.dotugk"}]: dispatch
Dec 07 09:42:50 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mgr metadata", "who": "compute-2.ntknug", "id": "compute-2.ntknug"}]: dispatch
Dec 07 09:42:50 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mgr metadata", "who": "compute-1.buauyv", "id": "compute-1.buauyv"}]: dispatch
Dec 07 09:42:50 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 07 09:42:50 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 07 09:42:50 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 07 09:42:50 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 07 09:42:50 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 07 09:42:50 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 07 09:42:50 compute-1 ceph-mon[80077]: Manager daemon compute-0.dotugk is now available
Dec 07 09:42:50 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dotugk/mirror_snapshot_schedule"}]: dispatch
Dec 07 09:42:50 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dotugk/trash_purge_schedule"}]: dispatch
Dec 07 09:42:50 compute-1 podman[83270]: 2025-12-07 09:42:50.822790328 +0000 UTC m=+0.175843631 container exec 0adb3b962f9be9f77484cea48a61ddb6dbdac27d2f6f2d11917c2759647cf1b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-crash-compute-1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 07 09:42:50 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).mds e2 new map
Dec 07 09:42:50 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).mds e2 print_map
                                           e2
                                           btime 2025-12-07T09:42:50.843512+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-07T09:42:50.843467+0000
                                           modified        2025-12-07T09:42:50.843467+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
Dec 07 09:42:50 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e36 e36: 3 total, 3 up, 3 in
Dec 07 09:42:50 compute-1 podman[83270]: 2025-12-07 09:42:50.915940698 +0000 UTC m=+0.268994001 container exec_died 0adb3b962f9be9f77484cea48a61ddb6dbdac27d2f6f2d11917c2759647cf1b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-crash-compute-1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 07 09:42:51 compute-1 sudo[83172]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:51 compute-1 sudo[83355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 09:42:51 compute-1 sudo[83355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:51 compute-1 sudo[83355]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:51 compute-1 sudo[83380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 07 09:42:51 compute-1 sudo[83380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:51 compute-1 ceph-mon[80077]: [07/Dec/2025:09:42:50] ENGINE Bus STARTING
Dec 07 09:42:51 compute-1 ceph-mon[80077]: mgrmap e25: compute-0.dotugk(active, since 1.50064s), standbys: compute-2.ntknug, compute-1.buauyv
Dec 07 09:42:51 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Dec 07 09:42:51 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Dec 07 09:42:51 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Dec 07 09:42:51 compute-1 ceph-mon[80077]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec 07 09:42:51 compute-1 ceph-mon[80077]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Dec 07 09:42:51 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Dec 07 09:42:51 compute-1 ceph-mon[80077]: osdmap e36: 3 total, 3 up, 3 in
Dec 07 09:42:51 compute-1 ceph-mon[80077]: fsmap cephfs:0
Dec 07 09:42:51 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:51 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:51 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:51 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:51 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:51 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:51 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:51 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:51 compute-1 sudo[83380]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:51 compute-1 sudo[83436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 09:42:51 compute-1 sudo[83436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:51 compute-1 sudo[83436]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:52 compute-1 sudo[83461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Dec 07 09:42:52 compute-1 sudo[83461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:52 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 07 09:42:52 compute-1 sudo[83461]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:52 compute-1 ceph-mon[80077]: from='client.14448 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 09:42:52 compute-1 ceph-mon[80077]: pgmap v3: 69 pgs: 69 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 07 09:42:52 compute-1 ceph-mon[80077]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec 07 09:42:52 compute-1 ceph-mon[80077]: [07/Dec/2025:09:42:50] ENGINE Serving on https://192.168.122.100:7150
Dec 07 09:42:52 compute-1 ceph-mon[80077]: [07/Dec/2025:09:42:50] ENGINE Client ('192.168.122.100', 44880) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 07 09:42:52 compute-1 ceph-mon[80077]: [07/Dec/2025:09:42:50] ENGINE Serving on http://192.168.122.100:8765
Dec 07 09:42:52 compute-1 ceph-mon[80077]: [07/Dec/2025:09:42:50] ENGINE Bus STARTED
Dec 07 09:42:52 compute-1 ceph-mon[80077]: pgmap v5: 69 pgs: 69 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 07 09:42:52 compute-1 ceph-mon[80077]: from='client.14481 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 09:42:52 compute-1 ceph-mon[80077]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec 07 09:42:52 compute-1 ceph-mon[80077]: mgrmap e26: compute-0.dotugk(active, since 2s), standbys: compute-2.ntknug, compute-1.buauyv
Dec 07 09:42:52 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:52 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:52 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:52 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec 07 09:42:52 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:52 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec 07 09:42:52 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Dec 07 09:42:52 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:52 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:52 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Dec 07 09:42:52 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:42:52 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 07 09:42:52 compute-1 sudo[83504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 07 09:42:52 compute-1 sudo[83504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:52 compute-1 sudo[83504]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:52 compute-1 sudo[83529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/etc/ceph
Dec 07 09:42:52 compute-1 sudo[83529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:52 compute-1 sudo[83529]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:53 compute-1 sudo[83554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/etc/ceph/ceph.conf.new
Dec 07 09:42:53 compute-1 sudo[83554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:53 compute-1 sudo[83554]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:53 compute-1 sudo[83579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c
Dec 07 09:42:53 compute-1 sudo[83579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:53 compute-1 sudo[83579]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:53 compute-1 sudo[83604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/etc/ceph/ceph.conf.new
Dec 07 09:42:53 compute-1 sudo[83604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:53 compute-1 sudo[83604]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:53 compute-1 sudo[83652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/etc/ceph/ceph.conf.new
Dec 07 09:42:53 compute-1 sudo[83652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:53 compute-1 sudo[83652]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:53 compute-1 sudo[83677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/etc/ceph/ceph.conf.new
Dec 07 09:42:53 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e37 e37: 3 total, 3 up, 3 in
Dec 07 09:42:53 compute-1 sudo[83677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:53 compute-1 sudo[83677]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:53 compute-1 sudo[83702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 07 09:42:53 compute-1 sudo[83702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:53 compute-1 sudo[83702]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:53 compute-1 sudo[83727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config
Dec 07 09:42:53 compute-1 sudo[83727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:53 compute-1 sudo[83727]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:53 compute-1 sudo[83752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config
Dec 07 09:42:53 compute-1 sudo[83752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:53 compute-1 sudo[83752]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:53 compute-1 sudo[83777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf.new
Dec 07 09:42:53 compute-1 sudo[83777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:53 compute-1 sudo[83777]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:53 compute-1 sudo[83802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c
Dec 07 09:42:53 compute-1 sudo[83802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:53 compute-1 sudo[83802]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:53 compute-1 sudo[83827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf.new
Dec 07 09:42:53 compute-1 sudo[83827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:53 compute-1 sudo[83827]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:53 compute-1 ceph-mon[80077]: Adjusting osd_memory_target on compute-0 to 128.0M
Dec 07 09:42:53 compute-1 ceph-mon[80077]: Adjusting osd_memory_target on compute-1 to 127.9M
Dec 07 09:42:53 compute-1 ceph-mon[80077]: Unable to set osd_memory_target on compute-0 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Dec 07 09:42:53 compute-1 ceph-mon[80077]: Unable to set osd_memory_target on compute-1 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Dec 07 09:42:53 compute-1 ceph-mon[80077]: from='client.14493 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "cluster_id": "cephfs", "ingress": true, "virtual_ip": "192.168.122.2/24", "ingress_mode": "haproxy-protocol", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 09:42:53 compute-1 ceph-mon[80077]: Adjusting osd_memory_target on compute-2 to 128.0M
Dec 07 09:42:53 compute-1 ceph-mon[80077]: Unable to set osd_memory_target on compute-2 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Dec 07 09:42:53 compute-1 ceph-mon[80077]: Updating compute-0:/etc/ceph/ceph.conf
Dec 07 09:42:53 compute-1 ceph-mon[80077]: Updating compute-1:/etc/ceph/ceph.conf
Dec 07 09:42:53 compute-1 ceph-mon[80077]: Updating compute-2:/etc/ceph/ceph.conf
Dec 07 09:42:53 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Dec 07 09:42:53 compute-1 ceph-mon[80077]: osdmap e37: 3 total, 3 up, 3 in
Dec 07 09:42:53 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Dec 07 09:42:53 compute-1 ceph-mon[80077]: Updating compute-0:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec 07 09:42:53 compute-1 ceph-mon[80077]: Updating compute-1:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec 07 09:42:53 compute-1 ceph-mon[80077]: pgmap v7: 70 pgs: 1 unknown, 69 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 07 09:42:53 compute-1 ceph-mon[80077]: Updating compute-2:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec 07 09:42:53 compute-1 ceph-mon[80077]: mgrmap e27: compute-0.dotugk(active, since 4s), standbys: compute-2.ntknug, compute-1.buauyv
Dec 07 09:42:53 compute-1 sudo[83875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf.new
Dec 07 09:42:53 compute-1 sudo[83875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:53 compute-1 sudo[83875]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:53 compute-1 sudo[83900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf.new
Dec 07 09:42:53 compute-1 sudo[83900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:53 compute-1 sudo[83900]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:53 compute-1 sudo[83925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf.new /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec 07 09:42:53 compute-1 sudo[83925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:53 compute-1 sudo[83925]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:54 compute-1 sudo[83950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 07 09:42:54 compute-1 sudo[83950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:54 compute-1 sudo[83950]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:54 compute-1 sudo[83975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/etc/ceph
Dec 07 09:42:54 compute-1 sudo[83975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:54 compute-1 sudo[83975]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:54 compute-1 sudo[84000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/etc/ceph/ceph.client.admin.keyring.new
Dec 07 09:42:54 compute-1 sudo[84000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:54 compute-1 sudo[84000]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:54 compute-1 sudo[84025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c
Dec 07 09:42:54 compute-1 sudo[84025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:54 compute-1 sudo[84025]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:54 compute-1 sudo[84050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/etc/ceph/ceph.client.admin.keyring.new
Dec 07 09:42:54 compute-1 sudo[84050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:54 compute-1 sudo[84050]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:54 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e38 e38: 3 total, 3 up, 3 in
Dec 07 09:42:54 compute-1 sudo[84098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/etc/ceph/ceph.client.admin.keyring.new
Dec 07 09:42:54 compute-1 sudo[84098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:54 compute-1 sudo[84098]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:54 compute-1 sudo[84123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/etc/ceph/ceph.client.admin.keyring.new
Dec 07 09:42:54 compute-1 sudo[84123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:54 compute-1 sudo[84123]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:54 compute-1 sudo[84148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 07 09:42:54 compute-1 sudo[84148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:54 compute-1 sudo[84148]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:54 compute-1 sudo[84173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config
Dec 07 09:42:54 compute-1 sudo[84173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:54 compute-1 sudo[84173]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:54 compute-1 sudo[84198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config
Dec 07 09:42:54 compute-1 sudo[84198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:54 compute-1 sudo[84198]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:54 compute-1 sudo[84223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring.new
Dec 07 09:42:54 compute-1 sudo[84223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:54 compute-1 sudo[84223]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:54 compute-1 ceph-mon[80077]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 07 09:42:54 compute-1 ceph-mon[80077]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 07 09:42:54 compute-1 ceph-mon[80077]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 07 09:42:54 compute-1 ceph-mon[80077]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 07 09:42:54 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Dec 07 09:42:54 compute-1 ceph-mon[80077]: osdmap e38: 3 total, 3 up, 3 in
Dec 07 09:42:54 compute-1 ceph-mon[80077]: Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec 07 09:42:54 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:54 compute-1 ceph-mon[80077]: Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec 07 09:42:54 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:54 compute-1 ceph-mon[80077]: Updating compute-1:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring
Dec 07 09:42:54 compute-1 ceph-mon[80077]: Updating compute-0:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring
Dec 07 09:42:54 compute-1 sudo[84248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c
Dec 07 09:42:54 compute-1 sudo[84248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:54 compute-1 sudo[84248]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:54 compute-1 sudo[84273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring.new
Dec 07 09:42:54 compute-1 sudo[84273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:54 compute-1 sudo[84273]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:55 compute-1 sudo[84321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring.new
Dec 07 09:42:55 compute-1 sudo[84321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:55 compute-1 sudo[84321]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:55 compute-1 sudo[84346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring.new
Dec 07 09:42:55 compute-1 sudo[84346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:55 compute-1 sudo[84346]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:55 compute-1 sudo[84371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring.new /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring
Dec 07 09:42:55 compute-1 sudo[84371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:55 compute-1 sudo[84371]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:55 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e39 e39: 3 total, 3 up, 3 in
Dec 07 09:42:56 compute-1 ceph-mon[80077]: Updating compute-2:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring
Dec 07 09:42:56 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:56 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:56 compute-1 ceph-mon[80077]: osdmap e39: 3 total, 3 up, 3 in
Dec 07 09:42:56 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:56 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:56 compute-1 ceph-mon[80077]: pgmap v10: 70 pgs: 1 unknown, 69 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 07 09:42:56 compute-1 ceph-mon[80077]: mgrmap e28: compute-0.dotugk(active, since 6s), standbys: compute-2.ntknug, compute-1.buauyv
Dec 07 09:42:56 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:56 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:56 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:56 compute-1 ceph-mon[80077]: Deploying daemon node-exporter.compute-0 on compute-0
Dec 07 09:42:57 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 07 09:42:57 compute-1 ceph-mon[80077]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec 07 09:42:57 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/3498920725' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Dec 07 09:42:57 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/3498920725' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Dec 07 09:42:58 compute-1 sudo[84396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 09:42:58 compute-1 sudo[84396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:58 compute-1 sudo[84396]: pam_unix(sudo:session): session closed for user root
Dec 07 09:42:58 compute-1 sudo[84421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/node-exporter:v1.7.0 --timeout 895 _orch deploy --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c
Dec 07 09:42:58 compute-1 sudo[84421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:42:58 compute-1 ceph-mon[80077]: pgmap v11: 70 pgs: 70 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Dec 07 09:42:58 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:58 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:58 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:42:58 compute-1 systemd[1]: Reloading.
Dec 07 09:42:58 compute-1 systemd-rc-local-generator[84508]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:42:58 compute-1 systemd-sysv-generator[84512]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:42:58 compute-1 systemd[1]: Reloading.
Dec 07 09:42:58 compute-1 systemd-sysv-generator[84556]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:42:58 compute-1 systemd-rc-local-generator[84552]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:42:58 compute-1 systemd[1]: Starting Ceph node-exporter.compute-1 for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c...
Dec 07 09:42:59 compute-1 bash[84610]: Trying to pull quay.io/prometheus/node-exporter:v1.7.0...
Dec 07 09:42:59 compute-1 ceph-mon[80077]: Deploying daemon node-exporter.compute-1 on compute-1
Dec 07 09:42:59 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/3896900130' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 07 09:42:59 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/3827221640' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 07 09:42:59 compute-1 bash[84610]: Getting image source signatures
Dec 07 09:42:59 compute-1 bash[84610]: Copying blob sha256:324153f2810a9927fcce320af9e4e291e0b6e805cbdd1f338386c756b9defa24
Dec 07 09:42:59 compute-1 bash[84610]: Copying blob sha256:2abcce694348cd2c949c0e98a7400ebdfd8341021bcf6b541bc72033ce982510
Dec 07 09:42:59 compute-1 bash[84610]: Copying blob sha256:455fd88e5221bc1e278ef2d059cd70e4df99a24e5af050ede621534276f6cf9a
Dec 07 09:43:00 compute-1 bash[84610]: Copying config sha256:72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e
Dec 07 09:43:00 compute-1 bash[84610]: Writing manifest to image destination
Dec 07 09:43:00 compute-1 podman[84610]: 2025-12-07 09:43:00.11203109 +0000 UTC m=+0.980174796 container create 0a4d2954764c22b86121bac3d11bc9b63e47782ce61bae7b4370d135249c4a00 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 07 09:43:00 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1407922ee602e0a6485feeb2e3eabdf17c433d9a8b39438e2e32f35dc1a18c6/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Dec 07 09:43:00 compute-1 podman[84610]: 2025-12-07 09:43:00.094757228 +0000 UTC m=+0.962900954 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Dec 07 09:43:00 compute-1 podman[84610]: 2025-12-07 09:43:00.162723195 +0000 UTC m=+1.030866921 container init 0a4d2954764c22b86121bac3d11bc9b63e47782ce61bae7b4370d135249c4a00 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 07 09:43:00 compute-1 podman[84610]: 2025-12-07 09:43:00.166930472 +0000 UTC m=+1.035074168 container start 0a4d2954764c22b86121bac3d11bc9b63e47782ce61bae7b4370d135249c4a00 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 07 09:43:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1[84687]: ts=2025-12-07T09:43:00.172Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Dec 07 09:43:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1[84687]: ts=2025-12-07T09:43:00.172Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Dec 07 09:43:00 compute-1 bash[84610]: 0a4d2954764c22b86121bac3d11bc9b63e47782ce61bae7b4370d135249c4a00
Dec 07 09:43:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1[84687]: ts=2025-12-07T09:43:00.174Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Dec 07 09:43:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1[84687]: ts=2025-12-07T09:43:00.174Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Dec 07 09:43:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1[84687]: ts=2025-12-07T09:43:00.174Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Dec 07 09:43:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1[84687]: ts=2025-12-07T09:43:00.175Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Dec 07 09:43:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1[84687]: ts=2025-12-07T09:43:00.175Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Dec 07 09:43:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1[84687]: ts=2025-12-07T09:43:00.175Z caller=node_exporter.go:117 level=info collector=arp
Dec 07 09:43:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1[84687]: ts=2025-12-07T09:43:00.175Z caller=node_exporter.go:117 level=info collector=bcache
Dec 07 09:43:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1[84687]: ts=2025-12-07T09:43:00.175Z caller=node_exporter.go:117 level=info collector=bonding
Dec 07 09:43:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1[84687]: ts=2025-12-07T09:43:00.175Z caller=node_exporter.go:117 level=info collector=btrfs
Dec 07 09:43:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1[84687]: ts=2025-12-07T09:43:00.175Z caller=node_exporter.go:117 level=info collector=conntrack
Dec 07 09:43:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1[84687]: ts=2025-12-07T09:43:00.175Z caller=node_exporter.go:117 level=info collector=cpu
Dec 07 09:43:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1[84687]: ts=2025-12-07T09:43:00.175Z caller=node_exporter.go:117 level=info collector=cpufreq
Dec 07 09:43:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1[84687]: ts=2025-12-07T09:43:00.175Z caller=node_exporter.go:117 level=info collector=diskstats
Dec 07 09:43:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1[84687]: ts=2025-12-07T09:43:00.175Z caller=node_exporter.go:117 level=info collector=dmi
Dec 07 09:43:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1[84687]: ts=2025-12-07T09:43:00.175Z caller=node_exporter.go:117 level=info collector=edac
Dec 07 09:43:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1[84687]: ts=2025-12-07T09:43:00.175Z caller=node_exporter.go:117 level=info collector=entropy
Dec 07 09:43:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1[84687]: ts=2025-12-07T09:43:00.175Z caller=node_exporter.go:117 level=info collector=fibrechannel
Dec 07 09:43:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1[84687]: ts=2025-12-07T09:43:00.175Z caller=node_exporter.go:117 level=info collector=filefd
Dec 07 09:43:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1[84687]: ts=2025-12-07T09:43:00.175Z caller=node_exporter.go:117 level=info collector=filesystem
Dec 07 09:43:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1[84687]: ts=2025-12-07T09:43:00.175Z caller=node_exporter.go:117 level=info collector=hwmon
Dec 07 09:43:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1[84687]: ts=2025-12-07T09:43:00.175Z caller=node_exporter.go:117 level=info collector=infiniband
Dec 07 09:43:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1[84687]: ts=2025-12-07T09:43:00.175Z caller=node_exporter.go:117 level=info collector=ipvs
Dec 07 09:43:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1[84687]: ts=2025-12-07T09:43:00.175Z caller=node_exporter.go:117 level=info collector=loadavg
Dec 07 09:43:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1[84687]: ts=2025-12-07T09:43:00.175Z caller=node_exporter.go:117 level=info collector=mdadm
Dec 07 09:43:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1[84687]: ts=2025-12-07T09:43:00.175Z caller=node_exporter.go:117 level=info collector=meminfo
Dec 07 09:43:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1[84687]: ts=2025-12-07T09:43:00.175Z caller=node_exporter.go:117 level=info collector=netclass
Dec 07 09:43:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1[84687]: ts=2025-12-07T09:43:00.175Z caller=node_exporter.go:117 level=info collector=netdev
Dec 07 09:43:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1[84687]: ts=2025-12-07T09:43:00.176Z caller=node_exporter.go:117 level=info collector=netstat
Dec 07 09:43:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1[84687]: ts=2025-12-07T09:43:00.176Z caller=node_exporter.go:117 level=info collector=nfs
Dec 07 09:43:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1[84687]: ts=2025-12-07T09:43:00.176Z caller=node_exporter.go:117 level=info collector=nfsd
Dec 07 09:43:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1[84687]: ts=2025-12-07T09:43:00.176Z caller=node_exporter.go:117 level=info collector=nvme
Dec 07 09:43:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1[84687]: ts=2025-12-07T09:43:00.176Z caller=node_exporter.go:117 level=info collector=os
Dec 07 09:43:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1[84687]: ts=2025-12-07T09:43:00.176Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Dec 07 09:43:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1[84687]: ts=2025-12-07T09:43:00.176Z caller=node_exporter.go:117 level=info collector=pressure
Dec 07 09:43:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1[84687]: ts=2025-12-07T09:43:00.176Z caller=node_exporter.go:117 level=info collector=rapl
Dec 07 09:43:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1[84687]: ts=2025-12-07T09:43:00.176Z caller=node_exporter.go:117 level=info collector=schedstat
Dec 07 09:43:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1[84687]: ts=2025-12-07T09:43:00.176Z caller=node_exporter.go:117 level=info collector=selinux
Dec 07 09:43:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1[84687]: ts=2025-12-07T09:43:00.176Z caller=node_exporter.go:117 level=info collector=sockstat
Dec 07 09:43:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1[84687]: ts=2025-12-07T09:43:00.176Z caller=node_exporter.go:117 level=info collector=softnet
Dec 07 09:43:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1[84687]: ts=2025-12-07T09:43:00.176Z caller=node_exporter.go:117 level=info collector=stat
Dec 07 09:43:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1[84687]: ts=2025-12-07T09:43:00.176Z caller=node_exporter.go:117 level=info collector=tapestats
Dec 07 09:43:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1[84687]: ts=2025-12-07T09:43:00.176Z caller=node_exporter.go:117 level=info collector=textfile
Dec 07 09:43:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1[84687]: ts=2025-12-07T09:43:00.176Z caller=node_exporter.go:117 level=info collector=thermal_zone
Dec 07 09:43:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1[84687]: ts=2025-12-07T09:43:00.176Z caller=node_exporter.go:117 level=info collector=time
Dec 07 09:43:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1[84687]: ts=2025-12-07T09:43:00.176Z caller=node_exporter.go:117 level=info collector=udp_queues
Dec 07 09:43:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1[84687]: ts=2025-12-07T09:43:00.176Z caller=node_exporter.go:117 level=info collector=uname
Dec 07 09:43:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1[84687]: ts=2025-12-07T09:43:00.176Z caller=node_exporter.go:117 level=info collector=vmstat
Dec 07 09:43:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1[84687]: ts=2025-12-07T09:43:00.177Z caller=node_exporter.go:117 level=info collector=xfs
Dec 07 09:43:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1[84687]: ts=2025-12-07T09:43:00.177Z caller=node_exporter.go:117 level=info collector=zfs
Dec 07 09:43:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1[84687]: ts=2025-12-07T09:43:00.177Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Dec 07 09:43:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1[84687]: ts=2025-12-07T09:43:00.177Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
Dec 07 09:43:00 compute-1 systemd[1]: Started Ceph node-exporter.compute-1 for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec 07 09:43:00 compute-1 sudo[84421]: pam_unix(sudo:session): session closed for user root
Dec 07 09:43:00 compute-1 ceph-mon[80077]: pgmap v12: 70 pgs: 70 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 11 op/s
Dec 07 09:43:00 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:00 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:00 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:01 compute-1 ceph-mon[80077]: Deploying daemon node-exporter.compute-2 on compute-2
Dec 07 09:43:01 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/3080182648' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Dec 07 09:43:02 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 07 09:43:02 compute-1 ceph-mon[80077]: pgmap v13: 70 pgs: 70 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 9 op/s
Dec 07 09:43:03 compute-1 ceph-mon[80077]: from='client.14529 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 07 09:43:03 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:04 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:04 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:04 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:04 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 07 09:43:04 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 07 09:43:04 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:43:04 compute-1 ceph-mon[80077]: pgmap v14: 70 pgs: 70 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 0 B/s wr, 8 op/s
Dec 07 09:43:05 compute-1 ceph-mon[80077]: from='client.14535 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 07 09:43:05 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:06 compute-1 ceph-mon[80077]: pgmap v15: 70 pgs: 70 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s
Dec 07 09:43:07 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 07 09:43:07 compute-1 ceph-mon[80077]: from='client.14541 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 07 09:43:08 compute-1 ceph-mon[80077]: pgmap v16: 70 pgs: 70 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s
Dec 07 09:43:08 compute-1 ceph-mon[80077]: from='client.14547 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 07 09:43:08 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:08 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:08 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.httxcl", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 07 09:43:08 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.httxcl", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 07 09:43:08 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:08 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:43:09 compute-1 ceph-mon[80077]: Deploying daemon rgw.rgw.compute-2.httxcl on compute-2
Dec 07 09:43:09 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1223296242' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 07 09:43:10 compute-1 sudo[84696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 09:43:10 compute-1 sudo[84696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:43:10 compute-1 sudo[84696]: pam_unix(sudo:session): session closed for user root
Dec 07 09:43:10 compute-1 sudo[84721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c
Dec 07 09:43:10 compute-1 sudo[84721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:43:10 compute-1 podman[84786]: 2025-12-07 09:43:10.541046223 +0000 UTC m=+0.051626463 container create 8ed0e27e26dfd03d30cfd9a2dfa230655925a6a6fe5983bd2a6504ace91e484d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_poitras, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 07 09:43:10 compute-1 systemd[1]: Started libpod-conmon-8ed0e27e26dfd03d30cfd9a2dfa230655925a6a6fe5983bd2a6504ace91e484d.scope.
Dec 07 09:43:10 compute-1 systemd[1]: Started libcrun container.
Dec 07 09:43:10 compute-1 podman[84786]: 2025-12-07 09:43:10.519419319 +0000 UTC m=+0.029999579 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 07 09:43:10 compute-1 podman[84786]: 2025-12-07 09:43:10.616074587 +0000 UTC m=+0.126654907 container init 8ed0e27e26dfd03d30cfd9a2dfa230655925a6a6fe5983bd2a6504ace91e484d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_poitras, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 07 09:43:10 compute-1 podman[84786]: 2025-12-07 09:43:10.626350594 +0000 UTC m=+0.136930824 container start 8ed0e27e26dfd03d30cfd9a2dfa230655925a6a6fe5983bd2a6504ace91e484d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_poitras, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec 07 09:43:10 compute-1 podman[84786]: 2025-12-07 09:43:10.629101381 +0000 UTC m=+0.139681641 container attach 8ed0e27e26dfd03d30cfd9a2dfa230655925a6a6fe5983bd2a6504ace91e484d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_poitras, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 07 09:43:10 compute-1 recursing_poitras[84802]: 167 167
Dec 07 09:43:10 compute-1 systemd[1]: libpod-8ed0e27e26dfd03d30cfd9a2dfa230655925a6a6fe5983bd2a6504ace91e484d.scope: Deactivated successfully.
Dec 07 09:43:10 compute-1 podman[84786]: 2025-12-07 09:43:10.632543677 +0000 UTC m=+0.143123947 container died 8ed0e27e26dfd03d30cfd9a2dfa230655925a6a6fe5983bd2a6504ace91e484d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_poitras, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec 07 09:43:10 compute-1 systemd[1]: var-lib-containers-storage-overlay-2b149fba5ae6722e5961261f56b7ffc6933302152a295c6b06b6e115dc3e037f-merged.mount: Deactivated successfully.
Dec 07 09:43:10 compute-1 podman[84786]: 2025-12-07 09:43:10.669979062 +0000 UTC m=+0.180559292 container remove 8ed0e27e26dfd03d30cfd9a2dfa230655925a6a6fe5983bd2a6504ace91e484d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_poitras, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 07 09:43:10 compute-1 systemd[1]: libpod-conmon-8ed0e27e26dfd03d30cfd9a2dfa230655925a6a6fe5983bd2a6504ace91e484d.scope: Deactivated successfully.
Dec 07 09:43:10 compute-1 ceph-mon[80077]: pgmap v17: 70 pgs: 70 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 07 09:43:10 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:10 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:10 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:10 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.cefzmy", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 07 09:43:10 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.cefzmy", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 07 09:43:10 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:10 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:43:10 compute-1 systemd[1]: Reloading.
Dec 07 09:43:10 compute-1 systemd-rc-local-generator[84842]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:43:10 compute-1 systemd-sysv-generator[84845]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:43:10 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e40 e40: 3 total, 3 up, 3 in
Dec 07 09:43:11 compute-1 systemd[1]: Reloading.
Dec 07 09:43:11 compute-1 systemd-rc-local-generator[84886]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:43:11 compute-1 systemd-sysv-generator[84890]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:43:11 compute-1 systemd[1]: Starting Ceph rgw.rgw.compute-1.cefzmy for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c...
Dec 07 09:43:11 compute-1 podman[84945]: 2025-12-07 09:43:11.449983309 +0000 UTC m=+0.039118874 container create 093acae4d147e0f8e1831d24903b7c252c52a4deda9a20bec2799f780583c930 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-rgw-rgw-compute-1-cefzmy, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec 07 09:43:11 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dbe96639b78ff9f5cf9b25da30b57ed1919f457398f40e45624314acb0a5364/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 07 09:43:11 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dbe96639b78ff9f5cf9b25da30b57ed1919f457398f40e45624314acb0a5364/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 07 09:43:11 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dbe96639b78ff9f5cf9b25da30b57ed1919f457398f40e45624314acb0a5364/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 07 09:43:11 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dbe96639b78ff9f5cf9b25da30b57ed1919f457398f40e45624314acb0a5364/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-1.cefzmy supports timestamps until 2038 (0x7fffffff)
Dec 07 09:43:11 compute-1 podman[84945]: 2025-12-07 09:43:11.42960486 +0000 UTC m=+0.018740445 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 07 09:43:11 compute-1 podman[84945]: 2025-12-07 09:43:11.528563242 +0000 UTC m=+0.117698847 container init 093acae4d147e0f8e1831d24903b7c252c52a4deda9a20bec2799f780583c930 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-rgw-rgw-compute-1-cefzmy, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 07 09:43:11 compute-1 podman[84945]: 2025-12-07 09:43:11.540389232 +0000 UTC m=+0.129524797 container start 093acae4d147e0f8e1831d24903b7c252c52a4deda9a20bec2799f780583c930 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-rgw-rgw-compute-1-cefzmy, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325)
Dec 07 09:43:11 compute-1 bash[84945]: 093acae4d147e0f8e1831d24903b7c252c52a4deda9a20bec2799f780583c930
Dec 07 09:43:11 compute-1 systemd[1]: Started Ceph rgw.rgw.compute-1.cefzmy for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec 07 09:43:11 compute-1 radosgw[84964]: deferred set uid:gid to 167:167 (ceph:ceph)
Dec 07 09:43:11 compute-1 radosgw[84964]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process radosgw, pid 2
Dec 07 09:43:11 compute-1 radosgw[84964]: framework: beast
Dec 07 09:43:11 compute-1 radosgw[84964]: framework conf key: endpoint, val: 192.168.122.101:8082
Dec 07 09:43:11 compute-1 radosgw[84964]: init_numa not setting numa affinity
Dec 07 09:43:11 compute-1 sudo[84721]: pam_unix(sudo:session): session closed for user root
Dec 07 09:43:11 compute-1 ceph-mon[80077]: Deploying daemon rgw.rgw.compute-1.cefzmy on compute-1
Dec 07 09:43:11 compute-1 ceph-mon[80077]: osdmap e40: 3 total, 3 up, 3 in
Dec 07 09:43:11 compute-1 ceph-mon[80077]: from='client.? ' entity='client.rgw.rgw.compute-2.httxcl' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Dec 07 09:43:11 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/3796621305' entity='client.rgw.rgw.compute-2.httxcl' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Dec 07 09:43:11 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/3168838345' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 07 09:43:11 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:11 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:11 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:11 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.kbsleq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 07 09:43:11 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.kbsleq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 07 09:43:11 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:11 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:43:11 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e41 e41: 3 total, 3 up, 3 in
Dec 07 09:43:11 compute-1 radosgw[84964]: rgw main: failed to create zonegroup with (17) File exists
Dec 07 09:43:12 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 07 09:43:12 compute-1 ceph-mon[80077]: pgmap v19: 71 pgs: 1 unknown, 70 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 07 09:43:12 compute-1 ceph-mon[80077]: Deploying daemon rgw.rgw.compute-0.kbsleq on compute-0
Dec 07 09:43:12 compute-1 ceph-mon[80077]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 07 09:43:12 compute-1 ceph-mon[80077]: from='client.? ' entity='client.rgw.rgw.compute-2.httxcl' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Dec 07 09:43:12 compute-1 ceph-mon[80077]: osdmap e41: 3 total, 3 up, 3 in
Dec 07 09:43:12 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1764855908' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Dec 07 09:43:12 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e42 e42: 3 total, 3 up, 3 in
Dec 07 09:43:12 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Dec 07 09:43:12 compute-1 ceph-mon[80077]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2333040374' entity='client.rgw.rgw.compute-1.cefzmy' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec 07 09:43:13 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 42 pg[10.0( empty local-lis/les=0/0 n=0 ec=42/42 lis/c=0/0 les/c/f=0/0/0 sis=42) [1] r=0 lpr=42 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:13 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e43 e43: 3 total, 3 up, 3 in
Dec 07 09:43:13 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 43 pg[10.0( empty local-lis/les=42/43 n=0 ec=42/42 lis/c=0/0 les/c/f=0/0/0 sis=42) [1] r=0 lpr=42 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:13 compute-1 ceph-mon[80077]: osdmap e42: 3 total, 3 up, 3 in
Dec 07 09:43:13 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/2333040374' entity='client.rgw.rgw.compute-1.cefzmy' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec 07 09:43:13 compute-1 ceph-mon[80077]: from='client.? ' entity='client.rgw.rgw.compute-1.cefzmy' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec 07 09:43:13 compute-1 ceph-mon[80077]: from='client.? ' entity='client.rgw.rgw.compute-2.httxcl' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec 07 09:43:13 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/197616850' entity='client.rgw.rgw.compute-2.httxcl' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec 07 09:43:13 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:13 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:13 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:13 compute-1 ceph-mon[80077]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec 07 09:43:13 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:13 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:13 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.rxtsyx", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec 07 09:43:13 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.rxtsyx", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 07 09:43:13 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:43:13 compute-1 ceph-mon[80077]: Deploying daemon mds.cephfs.compute-2.rxtsyx on compute-2
Dec 07 09:43:13 compute-1 ceph-mon[80077]: pgmap v22: 72 pgs: 2 unknown, 70 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 07 09:43:14 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "versions", "format": "json"} v 0)
Dec 07 09:43:14 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3687644668' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Dec 07 09:43:14 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e44 e44: 3 total, 3 up, 3 in
Dec 07 09:43:14 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Dec 07 09:43:14 compute-1 ceph-mon[80077]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2333040374' entity='client.rgw.rgw.compute-1.cefzmy' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 07 09:43:14 compute-1 ceph-mon[80077]: from='client.? ' entity='client.rgw.rgw.compute-1.cefzmy' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec 07 09:43:14 compute-1 ceph-mon[80077]: from='client.? ' entity='client.rgw.rgw.compute-2.httxcl' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec 07 09:43:14 compute-1 ceph-mon[80077]: osdmap e43: 3 total, 3 up, 3 in
Dec 07 09:43:14 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/3687644668' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Dec 07 09:43:14 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:15 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e45 e45: 3 total, 3 up, 3 in
Dec 07 09:43:15 compute-1 ceph-mon[80077]: osdmap e44: 3 total, 3 up, 3 in
Dec 07 09:43:15 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1792322983' entity='client.rgw.rgw.compute-0.kbsleq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 07 09:43:15 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/2333040374' entity='client.rgw.rgw.compute-1.cefzmy' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 07 09:43:15 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/197616850' entity='client.rgw.rgw.compute-2.httxcl' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 07 09:43:15 compute-1 ceph-mon[80077]: from='client.? ' entity='client.rgw.rgw.compute-1.cefzmy' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 07 09:43:15 compute-1 ceph-mon[80077]: from='client.? ' entity='client.rgw.rgw.compute-2.httxcl' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 07 09:43:15 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:15 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:15 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:15 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.qgzqbk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec 07 09:43:15 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.qgzqbk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 07 09:43:15 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:43:15 compute-1 ceph-mon[80077]: Deploying daemon mds.cephfs.compute-0.qgzqbk on compute-0
Dec 07 09:43:15 compute-1 ceph-mon[80077]: pgmap v25: 73 pgs: 3 unknown, 70 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 07 09:43:15 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1792322983' entity='client.rgw.rgw.compute-0.kbsleq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 07 09:43:15 compute-1 ceph-mon[80077]: from='client.? ' entity='client.rgw.rgw.compute-1.cefzmy' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 07 09:43:15 compute-1 ceph-mon[80077]: from='client.? ' entity='client.rgw.rgw.compute-2.httxcl' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 07 09:43:15 compute-1 ceph-mon[80077]: osdmap e45: 3 total, 3 up, 3 in
Dec 07 09:43:16 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).mds e3 new map
Dec 07 09:43:16 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).mds e3 print_map
                                           e3
                                           btime 2025-12-07T09:43:16:384831+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-07T09:42:50.843467+0000
                                           modified        2025-12-07T09:42:50.843467+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-2.rxtsyx{-1:24211} state up:standby seq 1 addr [v2:192.168.122.102:6804/1713004378,v1:192.168.122.102:6805/1713004378] compat {c=[1],r=[1],i=[1fff]}]
Dec 07 09:43:16 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).mds e4 new map
Dec 07 09:43:16 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).mds e4 print_map
                                           e4
                                           btime 2025-12-07T09:43:16:410151+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-07T09:42:50.843467+0000
                                           modified        2025-12-07T09:43:16.410136+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24211}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                           [mds.cephfs.compute-2.rxtsyx{0:24211} state up:creating seq 1 addr [v2:192.168.122.102:6804/1713004378,v1:192.168.122.102:6805/1713004378] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Dec 07 09:43:16 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e46 e46: 3 total, 3 up, 3 in
Dec 07 09:43:16 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Dec 07 09:43:16 compute-1 ceph-mon[80077]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2333040374' entity='client.rgw.rgw.compute-1.cefzmy' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 07 09:43:17 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 46 pg[12.0( empty local-lis/les=0/0 n=0 ec=46/46 lis/c=0/0 les/c/f=0/0/0 sis=46) [1] r=0 lpr=46 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:17 compute-1 sudo[85552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 09:43:17 compute-1 sudo[85552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:43:17 compute-1 sudo[85552]: pam_unix(sudo:session): session closed for user root
Dec 07 09:43:17 compute-1 sudo[85577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c
Dec 07 09:43:17 compute-1 sudo[85577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:43:17 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 07 09:43:17 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).mds e5 new map
Dec 07 09:43:17 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).mds e5 print_map
                                           e5
                                           btime 2025-12-07T09:43:17:384030+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-07T09:42:50.843467+0000
                                           modified        2025-12-07T09:43:17.384027+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24211}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24211 members: 24211
                                           [mds.cephfs.compute-2.rxtsyx{0:24211} state up:active seq 2 addr [v2:192.168.122.102:6804/1713004378,v1:192.168.122.102:6805/1713004378] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.qgzqbk{-1:14604} state up:standby seq 1 addr [v2:192.168.122.100:6806/3084821969,v1:192.168.122.100:6807/3084821969] compat {c=[1],r=[1],i=[1fff]}]
Dec 07 09:43:17 compute-1 ceph-mon[80077]: mds.? [v2:192.168.122.102:6804/1713004378,v1:192.168.122.102:6805/1713004378] up:boot
Dec 07 09:43:17 compute-1 ceph-mon[80077]: daemon mds.cephfs.compute-2.rxtsyx assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Dec 07 09:43:17 compute-1 ceph-mon[80077]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Dec 07 09:43:17 compute-1 ceph-mon[80077]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Dec 07 09:43:17 compute-1 ceph-mon[80077]: fsmap cephfs:0 1 up:standby
Dec 07 09:43:17 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.rxtsyx"}]: dispatch
Dec 07 09:43:17 compute-1 ceph-mon[80077]: fsmap cephfs:1 {0=cephfs.compute-2.rxtsyx=up:creating}
Dec 07 09:43:17 compute-1 ceph-mon[80077]: daemon mds.cephfs.compute-2.rxtsyx is now active in filesystem cephfs as rank 0
Dec 07 09:43:17 compute-1 ceph-mon[80077]: osdmap e46: 3 total, 3 up, 3 in
Dec 07 09:43:17 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1792322983' entity='client.rgw.rgw.compute-0.kbsleq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 07 09:43:17 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/2333040374' entity='client.rgw.rgw.compute-1.cefzmy' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 07 09:43:17 compute-1 ceph-mon[80077]: from='client.? ' entity='client.rgw.rgw.compute-1.cefzmy' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 07 09:43:17 compute-1 ceph-mon[80077]: from='client.? ' entity='client.rgw.rgw.compute-2.httxcl' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 07 09:43:17 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/197616850' entity='client.rgw.rgw.compute-2.httxcl' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 07 09:43:17 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:17 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:17 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:17 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.ihigcc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec 07 09:43:17 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.ihigcc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 07 09:43:17 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:43:17 compute-1 ceph-mon[80077]: mds.? [v2:192.168.122.102:6804/1713004378,v1:192.168.122.102:6805/1713004378] up:active
Dec 07 09:43:17 compute-1 ceph-mon[80077]: mds.? [v2:192.168.122.100:6806/3084821969,v1:192.168.122.100:6807/3084821969] up:boot
Dec 07 09:43:17 compute-1 ceph-mon[80077]: fsmap cephfs:1 {0=cephfs.compute-2.rxtsyx=up:active} 1 up:standby
Dec 07 09:43:17 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.qgzqbk"}]: dispatch
Dec 07 09:43:17 compute-1 podman[85640]: 2025-12-07 09:43:17.530471968 +0000 UTC m=+0.040791340 container create b0424d2af0352db0149b8584913ff2ce90646d097c813424ba31d9355ed64034 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_mahavira, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec 07 09:43:17 compute-1 systemd[1]: Started libpod-conmon-b0424d2af0352db0149b8584913ff2ce90646d097c813424ba31d9355ed64034.scope.
Dec 07 09:43:17 compute-1 systemd[1]: Started libcrun container.
Dec 07 09:43:17 compute-1 podman[85640]: 2025-12-07 09:43:17.509930424 +0000 UTC m=+0.020249826 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 07 09:43:17 compute-1 podman[85640]: 2025-12-07 09:43:17.6121927 +0000 UTC m=+0.122512082 container init b0424d2af0352db0149b8584913ff2ce90646d097c813424ba31d9355ed64034 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 07 09:43:17 compute-1 podman[85640]: 2025-12-07 09:43:17.623086584 +0000 UTC m=+0.133405976 container start b0424d2af0352db0149b8584913ff2ce90646d097c813424ba31d9355ed64034 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_mahavira, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 07 09:43:17 compute-1 podman[85640]: 2025-12-07 09:43:17.626431207 +0000 UTC m=+0.136750579 container attach b0424d2af0352db0149b8584913ff2ce90646d097c813424ba31d9355ed64034 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_mahavira, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec 07 09:43:17 compute-1 objective_mahavira[85656]: 167 167
Dec 07 09:43:17 compute-1 systemd[1]: libpod-b0424d2af0352db0149b8584913ff2ce90646d097c813424ba31d9355ed64034.scope: Deactivated successfully.
Dec 07 09:43:17 compute-1 conmon[85656]: conmon b0424d2af0352db0149b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b0424d2af0352db0149b8584913ff2ce90646d097c813424ba31d9355ed64034.scope/container/memory.events
Dec 07 09:43:17 compute-1 podman[85640]: 2025-12-07 09:43:17.632315261 +0000 UTC m=+0.142634663 container died b0424d2af0352db0149b8584913ff2ce90646d097c813424ba31d9355ed64034 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Dec 07 09:43:17 compute-1 systemd[1]: var-lib-containers-storage-overlay-fa35a6cc29ae3c2445fa89572eba7008441063dcb4e8d75a7fdbee227c2c2cd6-merged.mount: Deactivated successfully.
Dec 07 09:43:17 compute-1 podman[85640]: 2025-12-07 09:43:17.683541281 +0000 UTC m=+0.193860683 container remove b0424d2af0352db0149b8584913ff2ce90646d097c813424ba31d9355ed64034 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid)
Dec 07 09:43:17 compute-1 systemd[1]: libpod-conmon-b0424d2af0352db0149b8584913ff2ce90646d097c813424ba31d9355ed64034.scope: Deactivated successfully.
Dec 07 09:43:17 compute-1 systemd[1]: Reloading.
Dec 07 09:43:17 compute-1 systemd-rc-local-generator[85700]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:43:17 compute-1 systemd-sysv-generator[85705]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:43:18 compute-1 systemd[1]: Reloading.
Dec 07 09:43:18 compute-1 systemd-sysv-generator[85741]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:43:18 compute-1 systemd-rc-local-generator[85738]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:43:18 compute-1 systemd[1]: Starting Ceph mds.cephfs.compute-1.ihigcc for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c...
Dec 07 09:43:18 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e47 e47: 3 total, 3 up, 3 in
Dec 07 09:43:18 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Dec 07 09:43:18 compute-1 ceph-mon[80077]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2333040374' entity='client.rgw.rgw.compute-1.cefzmy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 07 09:43:18 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 47 pg[12.0( empty local-lis/les=46/47 n=0 ec=46/46 lis/c=0/0 les/c/f=0/0/0 sis=46) [1] r=0 lpr=46 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:18 compute-1 ceph-mon[80077]: Deploying daemon mds.cephfs.compute-1.ihigcc on compute-1
Dec 07 09:43:18 compute-1 ceph-mon[80077]: pgmap v28: 74 pgs: 1 unknown, 1 creating+peering, 72 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 6.5 KiB/s rd, 1.7 KiB/s wr, 9 op/s
Dec 07 09:43:18 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1792322983' entity='client.rgw.rgw.compute-0.kbsleq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 07 09:43:18 compute-1 ceph-mon[80077]: from='client.? ' entity='client.rgw.rgw.compute-1.cefzmy' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 07 09:43:18 compute-1 ceph-mon[80077]: from='client.? ' entity='client.rgw.rgw.compute-2.httxcl' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 07 09:43:18 compute-1 ceph-mon[80077]: osdmap e47: 3 total, 3 up, 3 in
Dec 07 09:43:18 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1792322983' entity='client.rgw.rgw.compute-0.kbsleq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 07 09:43:18 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/2333040374' entity='client.rgw.rgw.compute-1.cefzmy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 07 09:43:18 compute-1 ceph-mon[80077]: from='client.? ' entity='client.rgw.rgw.compute-1.cefzmy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 07 09:43:18 compute-1 ceph-mon[80077]: from='client.? ' entity='client.rgw.rgw.compute-2.httxcl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 07 09:43:18 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/197616850' entity='client.rgw.rgw.compute-2.httxcl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 07 09:43:18 compute-1 podman[85802]: 2025-12-07 09:43:18.534155899 +0000 UTC m=+0.046440778 container create 6a308179ca58d49c1f2acd1c5314d52000fa6dece54508fbc9699a5f9fe2aa19 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mds-cephfs-compute-1-ihigcc, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 07 09:43:18 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/961ba9c5c36a8195e4421bdd0f4c3a48547f7ea955a04df42759d529df91197a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 07 09:43:18 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/961ba9c5c36a8195e4421bdd0f4c3a48547f7ea955a04df42759d529df91197a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 07 09:43:18 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/961ba9c5c36a8195e4421bdd0f4c3a48547f7ea955a04df42759d529df91197a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 07 09:43:18 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/961ba9c5c36a8195e4421bdd0f4c3a48547f7ea955a04df42759d529df91197a/merged/var/lib/ceph/mds/ceph-cephfs.compute-1.ihigcc supports timestamps until 2038 (0x7fffffff)
Dec 07 09:43:18 compute-1 podman[85802]: 2025-12-07 09:43:18.604241026 +0000 UTC m=+0.116525985 container init 6a308179ca58d49c1f2acd1c5314d52000fa6dece54508fbc9699a5f9fe2aa19 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mds-cephfs-compute-1-ihigcc, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 07 09:43:18 compute-1 podman[85802]: 2025-12-07 09:43:18.514916872 +0000 UTC m=+0.027201791 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 07 09:43:18 compute-1 podman[85802]: 2025-12-07 09:43:18.614307818 +0000 UTC m=+0.126592727 container start 6a308179ca58d49c1f2acd1c5314d52000fa6dece54508fbc9699a5f9fe2aa19 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mds-cephfs-compute-1-ihigcc, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec 07 09:43:18 compute-1 bash[85802]: 6a308179ca58d49c1f2acd1c5314d52000fa6dece54508fbc9699a5f9fe2aa19
Dec 07 09:43:18 compute-1 systemd[1]: Started Ceph mds.cephfs.compute-1.ihigcc for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec 07 09:43:18 compute-1 sudo[85577]: pam_unix(sudo:session): session closed for user root
Dec 07 09:43:18 compute-1 ceph-mds[85822]: set uid:gid to 167:167 (ceph:ceph)
Dec 07 09:43:18 compute-1 ceph-mds[85822]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mds, pid 2
Dec 07 09:43:18 compute-1 ceph-mds[85822]: main not setting numa affinity
Dec 07 09:43:18 compute-1 ceph-mds[85822]: pidfile_write: ignore empty --pid-file
Dec 07 09:43:18 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mds-cephfs-compute-1-ihigcc[85818]: starting mds.cephfs.compute-1.ihigcc at 
Dec 07 09:43:18 compute-1 ceph-mds[85822]: mds.cephfs.compute-1.ihigcc Updating MDS map to version 5 from mon.2
Dec 07 09:43:19 compute-1 sudo[85841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 09:43:19 compute-1 sudo[85841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:43:19 compute-1 sudo[85841]: pam_unix(sudo:session): session closed for user root
Dec 07 09:43:19 compute-1 sudo[85866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c
Dec 07 09:43:19 compute-1 sudo[85866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:43:19 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e48 e48: 3 total, 3 up, 3 in
Dec 07 09:43:19 compute-1 podman[85930]: 2025-12-07 09:43:19.575114141 +0000 UTC m=+0.053705030 container create e6617ac9bed128b0710a53bb550276e2d000eb935e61c8481e06c2309021ad04 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec 07 09:43:19 compute-1 radosgw[84964]: v1 topic migration: starting v1 topic migration..
Dec 07 09:43:19 compute-1 radosgw[84964]: v1 topic migration: finished v1 topic migration
Dec 07 09:43:19 compute-1 radosgw[84964]: LDAP not started since no server URIs were provided in the configuration.
Dec 07 09:43:19 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-rgw-rgw-compute-1-cefzmy[84960]: 2025-12-07T09:43:19.606+0000 7fdd19531980 -1 LDAP not started since no server URIs were provided in the configuration.
Dec 07 09:43:19 compute-1 systemd[1]: Started libpod-conmon-e6617ac9bed128b0710a53bb550276e2d000eb935e61c8481e06c2309021ad04.scope.
Dec 07 09:43:19 compute-1 radosgw[84964]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Dec 07 09:43:19 compute-1 radosgw[84964]: framework: beast
Dec 07 09:43:19 compute-1 radosgw[84964]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Dec 07 09:43:19 compute-1 radosgw[84964]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Dec 07 09:43:19 compute-1 podman[85930]: 2025-12-07 09:43:19.547760928 +0000 UTC m=+0.026351807 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 07 09:43:19 compute-1 radosgw[84964]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Dec 07 09:43:19 compute-1 systemd[1]: Started libcrun container.
Dec 07 09:43:19 compute-1 radosgw[84964]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Dec 07 09:43:19 compute-1 radosgw[84964]: starting handler: beast
Dec 07 09:43:19 compute-1 radosgw[84964]: set uid:gid to 167:167 (ceph:ceph)
Dec 07 09:43:19 compute-1 radosgw[84964]: mgrc service_daemon_register rgw.24266 metadata {arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,container_hostname=compute-1,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.101:8082,frontend_type#0=beast,hostname=compute-1,id=rgw.compute-1.cefzmy,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025,kernel_version=5.14.0-645.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864316,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=15adcc20-c494-4e96-8c9d-9a9668d901cf,zone_name=default,zonegroup_id=33ce195e-0f10-43f1-a319-f93c51bac89f,zonegroup_name=default}
Dec 07 09:43:19 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).mds e6 new map
Dec 07 09:43:19 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).mds e6 print_map
                                           e6
                                           btime 2025-12-07T09:43:19:670484+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-07T09:42:50.843467+0000
                                           modified        2025-12-07T09:43:17.384027+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24211}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24211 members: 24211
                                           [mds.cephfs.compute-2.rxtsyx{0:24211} state up:active seq 2 addr [v2:192.168.122.102:6804/1713004378,v1:192.168.122.102:6805/1713004378] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.qgzqbk{-1:14604} state up:standby seq 1 addr [v2:192.168.122.100:6806/3084821969,v1:192.168.122.100:6807/3084821969] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.ihigcc{-1:24293} state up:standby seq 1 addr [v2:192.168.122.101:6804/1729259208,v1:192.168.122.101:6805/1729259208] compat {c=[1],r=[1],i=[1fff]}]
Dec 07 09:43:19 compute-1 ceph-mds[85822]: mds.cephfs.compute-1.ihigcc Updating MDS map to version 6 from mon.2
Dec 07 09:43:19 compute-1 ceph-mds[85822]: mds.cephfs.compute-1.ihigcc Monitors have assigned me to become a standby
Dec 07 09:43:19 compute-1 podman[85930]: 2025-12-07 09:43:19.689174635 +0000 UTC m=+0.167765524 container init e6617ac9bed128b0710a53bb550276e2d000eb935e61c8481e06c2309021ad04 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec 07 09:43:19 compute-1 radosgw[84964]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Dec 07 09:43:19 compute-1 podman[85930]: 2025-12-07 09:43:19.700008038 +0000 UTC m=+0.178598887 container start e6617ac9bed128b0710a53bb550276e2d000eb935e61c8481e06c2309021ad04 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_gould, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 07 09:43:19 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:19 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:19 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:19 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:19 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:19 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:19 compute-1 ceph-mon[80077]: Creating key for client.nfs.cephfs.0.0.compute-1.jddrlu
Dec 07 09:43:19 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.jddrlu", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec 07 09:43:19 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.jddrlu", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec 07 09:43:19 compute-1 ceph-mon[80077]: Ensuring nfs.cephfs.0 is in the ganesha grace table
Dec 07 09:43:19 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec 07 09:43:19 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec 07 09:43:19 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:43:19 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec 07 09:43:19 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec 07 09:43:19 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.jddrlu-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 07 09:43:19 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.jddrlu-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 07 09:43:19 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:43:19 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1792322983' entity='client.rgw.rgw.compute-0.kbsleq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 07 09:43:19 compute-1 ceph-mon[80077]: from='client.? ' entity='client.rgw.rgw.compute-1.cefzmy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 07 09:43:19 compute-1 ceph-mon[80077]: from='client.? ' entity='client.rgw.rgw.compute-2.httxcl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 07 09:43:19 compute-1 ceph-mon[80077]: osdmap e48: 3 total, 3 up, 3 in
Dec 07 09:43:19 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:19 compute-1 relaxed_gould[85978]: 167 167
Dec 07 09:43:19 compute-1 systemd[1]: libpod-e6617ac9bed128b0710a53bb550276e2d000eb935e61c8481e06c2309021ad04.scope: Deactivated successfully.
Dec 07 09:43:19 compute-1 podman[85930]: 2025-12-07 09:43:19.7079613 +0000 UTC m=+0.186552199 container attach e6617ac9bed128b0710a53bb550276e2d000eb935e61c8481e06c2309021ad04 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_gould, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec 07 09:43:19 compute-1 podman[85930]: 2025-12-07 09:43:19.708507625 +0000 UTC m=+0.187098514 container died e6617ac9bed128b0710a53bb550276e2d000eb935e61c8481e06c2309021ad04 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_gould, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 07 09:43:19 compute-1 radosgw[84964]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Dec 07 09:43:19 compute-1 systemd[1]: var-lib-containers-storage-overlay-3557d3ce71178c50d52516c297327cd37be5f988dabd23486bd740fb20cb7cfd-merged.mount: Deactivated successfully.
Dec 07 09:43:19 compute-1 radosgw[84964]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Dec 07 09:43:19 compute-1 podman[85930]: 2025-12-07 09:43:19.759346015 +0000 UTC m=+0.237936874 container remove e6617ac9bed128b0710a53bb550276e2d000eb935e61c8481e06c2309021ad04 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_gould, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 07 09:43:19 compute-1 radosgw[84964]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Dec 07 09:43:19 compute-1 systemd[1]: libpod-conmon-e6617ac9bed128b0710a53bb550276e2d000eb935e61c8481e06c2309021ad04.scope: Deactivated successfully.
Dec 07 09:43:19 compute-1 radosgw[84964]: INFO: RGWReshardLock::lock found lock on reshard.0000000014 to be held by another RGW process; skipping for now
Dec 07 09:43:19 compute-1 systemd[1]: Reloading.
Dec 07 09:43:19 compute-1 systemd-sysv-generator[86032]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:43:19 compute-1 systemd-rc-local-generator[86028]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:43:20 compute-1 systemd[1]: Reloading.
Dec 07 09:43:20 compute-1 systemd-rc-local-generator[86069]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:43:20 compute-1 systemd-sysv-generator[86072]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:43:20 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.jddrlu for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c...
Dec 07 09:43:20 compute-1 podman[86125]: 2025-12-07 09:43:20.682831387 +0000 UTC m=+0.045692716 container create 88b1575e7bf6a1c4a6a2738ad8a5b427833ca1d8abecd12798715ede0a232df4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 07 09:43:20 compute-1 ceph-mon[80077]: Rados config object exists: conf-nfs.cephfs
Dec 07 09:43:20 compute-1 ceph-mon[80077]: Creating key for client.nfs.cephfs.0.0.compute-1.jddrlu-rgw
Dec 07 09:43:20 compute-1 ceph-mon[80077]: Bind address in nfs.cephfs.0.0.compute-1.jddrlu's ganesha conf is defaulting to empty
Dec 07 09:43:20 compute-1 ceph-mon[80077]: Deploying daemon nfs.cephfs.0.0.compute-1.jddrlu on compute-1
Dec 07 09:43:20 compute-1 ceph-mon[80077]: pgmap v31: 74 pgs: 1 unknown, 1 creating+peering, 72 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 6.5 KiB/s rd, 1.7 KiB/s wr, 9 op/s
Dec 07 09:43:20 compute-1 ceph-mon[80077]: mds.? [v2:192.168.122.101:6804/1729259208,v1:192.168.122.101:6805/1729259208] up:boot
Dec 07 09:43:20 compute-1 ceph-mon[80077]: fsmap cephfs:1 {0=cephfs.compute-2.rxtsyx=up:active} 2 up:standby
Dec 07 09:43:20 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.ihigcc"}]: dispatch
Dec 07 09:43:20 compute-1 ceph-mon[80077]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec 07 09:43:20 compute-1 ceph-mon[80077]: Cluster is now healthy
Dec 07 09:43:20 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f248a49ab6387e5ffb2c2e007ebe14a92733484d6962af1d4eb2531219a2658c/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec 07 09:43:20 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f248a49ab6387e5ffb2c2e007ebe14a92733484d6962af1d4eb2531219a2658c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 07 09:43:20 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f248a49ab6387e5ffb2c2e007ebe14a92733484d6962af1d4eb2531219a2658c/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec 07 09:43:20 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f248a49ab6387e5ffb2c2e007ebe14a92733484d6962af1d4eb2531219a2658c/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.jddrlu-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec 07 09:43:20 compute-1 podman[86125]: 2025-12-07 09:43:20.750839066 +0000 UTC m=+0.113700435 container init 88b1575e7bf6a1c4a6a2738ad8a5b427833ca1d8abecd12798715ede0a232df4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec 07 09:43:20 compute-1 podman[86125]: 2025-12-07 09:43:20.66215563 +0000 UTC m=+0.025016949 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 07 09:43:20 compute-1 podman[86125]: 2025-12-07 09:43:20.766582506 +0000 UTC m=+0.129443825 container start 88b1575e7bf6a1c4a6a2738ad8a5b427833ca1d8abecd12798715ede0a232df4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec 07 09:43:20 compute-1 bash[86125]: 88b1575e7bf6a1c4a6a2738ad8a5b427833ca1d8abecd12798715ede0a232df4
Dec 07 09:43:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:20 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec 07 09:43:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:20 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec 07 09:43:20 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.jddrlu for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec 07 09:43:20 compute-1 sudo[85866]: pam_unix(sudo:session): session closed for user root
Dec 07 09:43:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:20 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec 07 09:43:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:20 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec 07 09:43:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:20 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec 07 09:43:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:20 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec 07 09:43:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:20 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec 07 09:43:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:20 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 07 09:43:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:20 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Dec 07 09:43:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:20 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Dec 07 09:43:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:20 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 07 09:43:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:20 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 07 09:43:21 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).mds e7 new map
Dec 07 09:43:21 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).mds e7 print_map
                                           e7
                                           btime 2025-12-07T09:43:21:377693+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        7
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-07T09:42:50.843467+0000
                                           modified        2025-12-07T09:43:20.452323+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24211}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24211 members: 24211
                                           [mds.cephfs.compute-2.rxtsyx{0:24211} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1713004378,v1:192.168.122.102:6805/1713004378] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.qgzqbk{-1:14604} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/3084821969,v1:192.168.122.100:6807/3084821969] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.ihigcc{-1:24293} state up:standby seq 1 addr [v2:192.168.122.101:6804/1729259208,v1:192.168.122.101:6805/1729259208] compat {c=[1],r=[1],i=[1fff]}]
Dec 07 09:43:21 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:21 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:21 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:21 compute-1 ceph-mon[80077]: Creating key for client.nfs.cephfs.1.0.compute-2.llxakn
Dec 07 09:43:21 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.llxakn", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec 07 09:43:21 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.llxakn", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec 07 09:43:21 compute-1 ceph-mon[80077]: Ensuring nfs.cephfs.1 is in the ganesha grace table
Dec 07 09:43:21 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec 07 09:43:21 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec 07 09:43:21 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:43:21 compute-1 ceph-mon[80077]: mds.? [v2:192.168.122.102:6804/1713004378,v1:192.168.122.102:6805/1713004378] up:active
Dec 07 09:43:21 compute-1 ceph-mon[80077]: mds.? [v2:192.168.122.100:6806/3084821969,v1:192.168.122.100:6807/3084821969] up:standby
Dec 07 09:43:21 compute-1 ceph-mon[80077]: fsmap cephfs:1 {0=cephfs.compute-2.rxtsyx=up:active} 2 up:standby
Dec 07 09:43:21 compute-1 ceph-mon[80077]: pgmap v32: 74 pgs: 74 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 245 KiB/s rd, 9.1 KiB/s wr, 460 op/s
Dec 07 09:43:22 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 07 09:43:23 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).mds e8 new map
Dec 07 09:43:23 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).mds e8 print_map
                                           e8
                                           btime 2025-12-07T09:43:23.399497+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        7
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-07T09:42:50.843467+0000
                                           modified        2025-12-07T09:43:20.452323+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24211}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24211 members: 24211
                                           [mds.cephfs.compute-2.rxtsyx{0:24211} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1713004378,v1:192.168.122.102:6805/1713004378] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.qgzqbk{-1:14604} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/3084821969,v1:192.168.122.100:6807/3084821969] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.ihigcc{-1:24293} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/1729259208,v1:192.168.122.101:6805/1729259208] compat {c=[1],r=[1],i=[1fff]}]
Dec 07 09:43:23 compute-1 ceph-mds[85822]: mds.cephfs.compute-1.ihigcc Updating MDS map to version 8 from mon.2
Dec 07 09:43:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:23 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000001:nfs.cephfs.0: -2
Dec 07 09:43:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:23 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 07 09:43:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:23 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec 07 09:43:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:23 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec 07 09:43:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:23 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec 07 09:43:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:23 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec 07 09:43:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:23 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec 07 09:43:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:23 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec 07 09:43:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:23 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 09:43:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:23 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 09:43:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:23 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 09:43:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:23 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec 07 09:43:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:23 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 09:43:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:23 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec 07 09:43:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:23 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec 07 09:43:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:23 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec 07 09:43:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:23 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec 07 09:43:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:23 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec 07 09:43:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:23 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec 07 09:43:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:23 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec 07 09:43:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:23 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec 07 09:43:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:23 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec 07 09:43:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:23 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec 07 09:43:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:23 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec 07 09:43:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:23 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec 07 09:43:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:23 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 07 09:43:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:23 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec 07 09:43:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:23 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 07 09:43:24 compute-1 ceph-mon[80077]: mds.? [v2:192.168.122.101:6804/1729259208,v1:192.168.122.101:6805/1729259208] up:standby
Dec 07 09:43:24 compute-1 ceph-mon[80077]: fsmap cephfs:1 {0=cephfs.compute-2.rxtsyx=up:active} 2 up:standby
Dec 07 09:43:24 compute-1 ceph-mon[80077]: pgmap v33: 74 pgs: 74 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 207 KiB/s rd, 7.7 KiB/s wr, 389 op/s
Dec 07 09:43:24 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec 07 09:43:24 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec 07 09:43:24 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.llxakn-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 07 09:43:24 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.llxakn-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 07 09:43:24 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:43:25 compute-1 ceph-mon[80077]: Rados config object exists: conf-nfs.cephfs
Dec 07 09:43:25 compute-1 ceph-mon[80077]: Creating key for client.nfs.cephfs.1.0.compute-2.llxakn-rgw
Dec 07 09:43:25 compute-1 ceph-mon[80077]: Bind address in nfs.cephfs.1.0.compute-2.llxakn's ganesha conf is defaulting to empty
Dec 07 09:43:25 compute-1 ceph-mon[80077]: Deploying daemon nfs.cephfs.1.0.compute-2.llxakn on compute-2
Dec 07 09:43:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:25 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 07 09:43:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:26 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 07 09:43:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:26 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 07 09:43:26 compute-1 ceph-mon[80077]: pgmap v34: 74 pgs: 74 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 168 KiB/s rd, 6.2 KiB/s wr, 315 op/s
Dec 07 09:43:26 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:26 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:26 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:26 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.bjrqrk", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec 07 09:43:26 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.bjrqrk", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec 07 09:43:26 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec 07 09:43:26 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec 07 09:43:26 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:43:27 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 07 09:43:28 compute-1 ceph-mon[80077]: Creating key for client.nfs.cephfs.2.0.compute-0.bjrqrk
Dec 07 09:43:28 compute-1 ceph-mon[80077]: Ensuring nfs.cephfs.2 is in the ganesha grace table
Dec 07 09:43:30 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:30 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 07 09:43:30 compute-1 ceph-mon[80077]: pgmap v35: 74 pgs: 74 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 160 KiB/s rd, 6.3 KiB/s wr, 299 op/s
Dec 07 09:43:30 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec 07 09:43:31 compute-1 ceph-mon[80077]: pgmap v36: 74 pgs: 74 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 144 KiB/s rd, 5.7 KiB/s wr, 269 op/s
Dec 07 09:43:31 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec 07 09:43:31 compute-1 ceph-mon[80077]: Rados config object exists: conf-nfs.cephfs
Dec 07 09:43:31 compute-1 ceph-mon[80077]: Creating key for client.nfs.cephfs.2.0.compute-0.bjrqrk-rgw
Dec 07 09:43:31 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.bjrqrk-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 07 09:43:31 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:31 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.bjrqrk-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 07 09:43:31 compute-1 ceph-mon[80077]: Bind address in nfs.cephfs.2.0.compute-0.bjrqrk's ganesha conf is defaulting to empty
Dec 07 09:43:31 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:43:31 compute-1 ceph-mon[80077]: Deploying daemon nfs.cephfs.2.0.compute-0.bjrqrk on compute-0
Dec 07 09:43:32 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 07 09:43:32 compute-1 sudo[86196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 09:43:32 compute-1 sudo[86196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:43:32 compute-1 sudo[86196]: pam_unix(sudo:session): session closed for user root
Dec 07 09:43:32 compute-1 sudo[86221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/haproxy:2.3 --timeout 895 _orch deploy --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c
Dec 07 09:43:32 compute-1 sudo[86221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:43:32 compute-1 ceph-mon[80077]: pgmap v37: 74 pgs: 74 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 123 KiB/s rd, 5.6 KiB/s wr, 229 op/s
Dec 07 09:43:32 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:32 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:32 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:32 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:32 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:32 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:32 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 07 09:43:32 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:32 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 07 09:43:32 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:32 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 07 09:43:32 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:32 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 07 09:43:32 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:32 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 07 09:43:32 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:32 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 07 09:43:33 compute-1 ceph-mon[80077]: Deploying daemon haproxy.nfs.cephfs.compute-1.kwciua on compute-1
Dec 07 09:43:34 compute-1 ceph-mon[80077]: pgmap v38: 74 pgs: 74 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 1.4 KiB/s wr, 18 op/s
Dec 07 09:43:35 compute-1 podman[86286]: 2025-12-07 09:43:35.622540093 +0000 UTC m=+2.719652300 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Dec 07 09:43:35 compute-1 podman[86286]: 2025-12-07 09:43:35.637942622 +0000 UTC m=+2.735054809 container create 62408d662e7b21a1a2a8b3d234c608aad257e1ba3d7f5cb94db8197f508549ad (image=quay.io/ceph/haproxy:2.3, name=nervous_ride)
Dec 07 09:43:35 compute-1 systemd[1]: Started libpod-conmon-62408d662e7b21a1a2a8b3d234c608aad257e1ba3d7f5cb94db8197f508549ad.scope.
Dec 07 09:43:35 compute-1 systemd[1]: Started libcrun container.
Dec 07 09:43:35 compute-1 podman[86286]: 2025-12-07 09:43:35.692024583 +0000 UTC m=+2.789136790 container init 62408d662e7b21a1a2a8b3d234c608aad257e1ba3d7f5cb94db8197f508549ad (image=quay.io/ceph/haproxy:2.3, name=nervous_ride)
Dec 07 09:43:35 compute-1 podman[86286]: 2025-12-07 09:43:35.699395878 +0000 UTC m=+2.796508065 container start 62408d662e7b21a1a2a8b3d234c608aad257e1ba3d7f5cb94db8197f508549ad (image=quay.io/ceph/haproxy:2.3, name=nervous_ride)
Dec 07 09:43:35 compute-1 podman[86286]: 2025-12-07 09:43:35.702301329 +0000 UTC m=+2.799413546 container attach 62408d662e7b21a1a2a8b3d234c608aad257e1ba3d7f5cb94db8197f508549ad (image=quay.io/ceph/haproxy:2.3, name=nervous_ride)
Dec 07 09:43:35 compute-1 nervous_ride[86401]: 0 0
Dec 07 09:43:35 compute-1 systemd[1]: libpod-62408d662e7b21a1a2a8b3d234c608aad257e1ba3d7f5cb94db8197f508549ad.scope: Deactivated successfully.
Dec 07 09:43:35 compute-1 podman[86286]: 2025-12-07 09:43:35.70410625 +0000 UTC m=+2.801218437 container died 62408d662e7b21a1a2a8b3d234c608aad257e1ba3d7f5cb94db8197f508549ad (image=quay.io/ceph/haproxy:2.3, name=nervous_ride)
Dec 07 09:43:35 compute-1 systemd[1]: var-lib-containers-storage-overlay-6b00be0e102684d125a9e81e05a47efb37bbd7901f50f70f2b0163cc3bcd8bc5-merged.mount: Deactivated successfully.
Dec 07 09:43:35 compute-1 podman[86286]: 2025-12-07 09:43:35.743386817 +0000 UTC m=+2.840499014 container remove 62408d662e7b21a1a2a8b3d234c608aad257e1ba3d7f5cb94db8197f508549ad (image=quay.io/ceph/haproxy:2.3, name=nervous_ride)
Dec 07 09:43:35 compute-1 systemd[1]: libpod-conmon-62408d662e7b21a1a2a8b3d234c608aad257e1ba3d7f5cb94db8197f508549ad.scope: Deactivated successfully.
Dec 07 09:43:35 compute-1 systemd[1]: Reloading.
Dec 07 09:43:35 compute-1 systemd-sysv-generator[86454]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:43:35 compute-1 systemd-rc-local-generator[86446]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:43:36 compute-1 systemd[1]: Reloading.
Dec 07 09:43:36 compute-1 systemd-rc-local-generator[86492]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:43:36 compute-1 systemd-sysv-generator[86496]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:43:36 compute-1 systemd[1]: Starting Ceph haproxy.nfs.cephfs.compute-1.kwciua for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c...
Dec 07 09:43:36 compute-1 ceph-mon[80077]: pgmap v39: 74 pgs: 74 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 1.4 KiB/s wr, 18 op/s
Dec 07 09:43:36 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:36 compute-1 podman[86551]: 2025-12-07 09:43:36.646173301 +0000 UTC m=+0.045159092 container create beb6c210855d5dee9a79b7f50cd4854bb9f0b0f00930d5f252196d17b26aec7f (image=quay.io/ceph/haproxy:2.3, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua)
Dec 07 09:43:36 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40fffa86e3fc9db9f7da6442f702308d9c4304d923498eb2cad54171a35d6bb5/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Dec 07 09:43:36 compute-1 podman[86551]: 2025-12-07 09:43:36.701596318 +0000 UTC m=+0.100582159 container init beb6c210855d5dee9a79b7f50cd4854bb9f0b0f00930d5f252196d17b26aec7f (image=quay.io/ceph/haproxy:2.3, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua)
Dec 07 09:43:36 compute-1 podman[86551]: 2025-12-07 09:43:36.70703352 +0000 UTC m=+0.106019371 container start beb6c210855d5dee9a79b7f50cd4854bb9f0b0f00930d5f252196d17b26aec7f (image=quay.io/ceph/haproxy:2.3, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua)
Dec 07 09:43:36 compute-1 bash[86551]: beb6c210855d5dee9a79b7f50cd4854bb9f0b0f00930d5f252196d17b26aec7f
Dec 07 09:43:36 compute-1 podman[86551]: 2025-12-07 09:43:36.627044157 +0000 UTC m=+0.026029928 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Dec 07 09:43:36 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [NOTICE] 340/094336 (2) : New worker #1 (4) forked
Dec 07 09:43:36 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:36 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f88000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:43:36 compute-1 systemd[1]: Started Ceph haproxy.nfs.cephfs.compute-1.kwciua for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec 07 09:43:36 compute-1 sudo[86221]: pam_unix(sudo:session): session closed for user root
Dec 07 09:43:37 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 07 09:43:37 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:37 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:37 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:38 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:38 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f80001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:43:38 compute-1 ceph-mon[80077]: Deploying daemon haproxy.nfs.cephfs.compute-0.ieiboq on compute-0
Dec 07 09:43:38 compute-1 ceph-mon[80077]: pgmap v40: 74 pgs: 74 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 2.4 KiB/s wr, 23 op/s
Dec 07 09:43:39 compute-1 ceph-mon[80077]: pgmap v41: 74 pgs: 74 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 4.9 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Dec 07 09:43:40 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:40 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f64000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:43:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:41 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:43:42 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:42 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:42 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:42 compute-1 ceph-mon[80077]: Deploying daemon haproxy.nfs.cephfs.compute-2.lkwxww on compute-2
Dec 07 09:43:42 compute-1 ceph-mon[80077]: pgmap v42: 74 pgs: 74 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 4.9 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Dec 07 09:43:42 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 07 09:43:42 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:42 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f74000fa0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:43:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:43 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f800025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:43:44 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:44 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f640016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:43:44 compute-1 ceph-mon[80077]: pgmap v43: 74 pgs: 74 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 1023 B/s wr, 4 op/s
Dec 07 09:43:45 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:45 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c0016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:43:45 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:45 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f74001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:43:46 compute-1 ceph-mon[80077]: pgmap v44: 74 pgs: 74 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 1023 B/s wr, 4 op/s
Dec 07 09:43:46 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:46 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:46 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:46 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f800025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:43:47 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:47 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f640016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:43:47 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 07 09:43:47 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:47 compute-1 ceph-mon[80077]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 07 09:43:47 compute-1 ceph-mon[80077]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 07 09:43:47 compute-1 ceph-mon[80077]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec 07 09:43:47 compute-1 ceph-mon[80077]: Deploying daemon keepalived.nfs.cephfs.compute-2.yjewfr on compute-2
Dec 07 09:43:48 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:48 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c0016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:43:48 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:48 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f74001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:43:48 compute-1 ceph-mon[80077]: pgmap v45: 74 pgs: 74 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1023 B/s wr, 4 op/s
Dec 07 09:43:49 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:49 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f800025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:43:49 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:49 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f640016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:43:49 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Dec 07 09:43:49 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e49 e49: 3 total, 3 up, 3 in
Dec 07 09:43:50 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:50 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c0016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:43:51 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:51 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f74001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:43:51 compute-1 ceph-mon[80077]: pgmap v46: 74 pgs: 74 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:43:51 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Dec 07 09:43:51 compute-1 ceph-mon[80077]: osdmap e49: 3 total, 3 up, 3 in
Dec 07 09:43:51 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Dec 07 09:43:51 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e50 e50: 3 total, 3 up, 3 in
Dec 07 09:43:51 compute-1 sudo[86585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 09:43:51 compute-1 sudo[86585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:43:51 compute-1 sudo[86585]: pam_unix(sudo:session): session closed for user root
Dec 07 09:43:51 compute-1 sudo[86610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/keepalived:2.2.4 --timeout 895 _orch deploy --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c
Dec 07 09:43:51 compute-1 sudo[86610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:43:51 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:51 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f800025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:43:52 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e51 e51: 3 total, 3 up, 3 in
Dec 07 09:43:52 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Dec 07 09:43:52 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:52 compute-1 ceph-mon[80077]: osdmap e50: 3 total, 3 up, 3 in
Dec 07 09:43:52 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Dec 07 09:43:52 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:52 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:52 compute-1 ceph-mon[80077]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec 07 09:43:52 compute-1 ceph-mon[80077]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 07 09:43:52 compute-1 ceph-mon[80077]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 07 09:43:52 compute-1 ceph-mon[80077]: Deploying daemon keepalived.nfs.cephfs.compute-1.gawwbe on compute-1
Dec 07 09:43:52 compute-1 ceph-mon[80077]: pgmap v49: 74 pgs: 74 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s
Dec 07 09:43:52 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 07 09:43:52 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 07 09:43:52 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 07 09:43:52 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:52 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f800025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:43:53 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:53 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:43:53 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e52 e52: 3 total, 3 up, 3 in
Dec 07 09:43:53 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Dec 07 09:43:53 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Dec 07 09:43:53 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Dec 07 09:43:53 compute-1 ceph-mon[80077]: osdmap e51: 3 total, 3 up, 3 in
Dec 07 09:43:53 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Dec 07 09:43:53 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:53 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f74002f50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:43:54 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e53 e53: 3 total, 3 up, 3 in
Dec 07 09:43:54 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Dec 07 09:43:54 compute-1 ceph-mon[80077]: osdmap e52: 3 total, 3 up, 3 in
Dec 07 09:43:54 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Dec 07 09:43:54 compute-1 ceph-mon[80077]: 5.19 scrub starts
Dec 07 09:43:54 compute-1 ceph-mon[80077]: 5.19 scrub ok
Dec 07 09:43:54 compute-1 ceph-mon[80077]: pgmap v52: 136 pgs: 62 unknown, 74 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec 07 09:43:54 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 07 09:43:54 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Dec 07 09:43:54 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Dec 07 09:43:54 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec 07 09:43:54 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Dec 07 09:43:54 compute-1 ceph-mon[80077]: osdmap e53: 3 total, 3 up, 3 in
Dec 07 09:43:54 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 53 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=53 pruub=11.825730324s) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 active pruub 202.152069092s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:43:54 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 53 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=53 pruub=11.825730324s) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 unknown pruub 202.152069092s@ mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:54 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:54 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f64002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:43:54 compute-1 podman[86676]: 2025-12-07 09:43:54.802200741 +0000 UTC m=+3.060029384 container create 0207d5518a90f725563f71a0a206315761b6390cdb1e7bf394e7aaa7899234c7 (image=quay.io/ceph/keepalived:2.2.4, name=gracious_chaum, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, distribution-scope=public, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, release=1793, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, version=2.2.4)
Dec 07 09:43:54 compute-1 systemd[1]: Started libpod-conmon-0207d5518a90f725563f71a0a206315761b6390cdb1e7bf394e7aaa7899234c7.scope.
Dec 07 09:43:54 compute-1 podman[86676]: 2025-12-07 09:43:54.780942958 +0000 UTC m=+3.038771651 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Dec 07 09:43:54 compute-1 systemd[1]: Started libcrun container.
Dec 07 09:43:54 compute-1 podman[86676]: 2025-12-07 09:43:54.916783725 +0000 UTC m=+3.174612378 container init 0207d5518a90f725563f71a0a206315761b6390cdb1e7bf394e7aaa7899234c7 (image=quay.io/ceph/keepalived:2.2.4, name=gracious_chaum, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, vcs-type=git, release=1793, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, io.buildah.version=1.28.2, distribution-scope=public, com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, name=keepalived)
Dec 07 09:43:54 compute-1 podman[86676]: 2025-12-07 09:43:54.929745704 +0000 UTC m=+3.187574367 container start 0207d5518a90f725563f71a0a206315761b6390cdb1e7bf394e7aaa7899234c7 (image=quay.io/ceph/keepalived:2.2.4, name=gracious_chaum, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, description=keepalived for Ceph, vcs-type=git, com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, vendor=Red Hat, Inc., name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793)
Dec 07 09:43:54 compute-1 podman[86676]: 2025-12-07 09:43:54.933886131 +0000 UTC m=+3.191714784 container attach 0207d5518a90f725563f71a0a206315761b6390cdb1e7bf394e7aaa7899234c7 (image=quay.io/ceph/keepalived:2.2.4, name=gracious_chaum, vcs-type=git, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, release=1793)
Dec 07 09:43:54 compute-1 gracious_chaum[86772]: 0 0
Dec 07 09:43:54 compute-1 systemd[1]: libpod-0207d5518a90f725563f71a0a206315761b6390cdb1e7bf394e7aaa7899234c7.scope: Deactivated successfully.
Dec 07 09:43:54 compute-1 podman[86676]: 2025-12-07 09:43:54.943025162 +0000 UTC m=+3.200853795 container died 0207d5518a90f725563f71a0a206315761b6390cdb1e7bf394e7aaa7899234c7 (image=quay.io/ceph/keepalived:2.2.4, name=gracious_chaum, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, com.redhat.component=keepalived-container, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, description=keepalived for Ceph, vcs-type=git, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., name=keepalived)
Dec 07 09:43:54 compute-1 systemd[1]: var-lib-containers-storage-overlay-3352f41b065b555ce28f831d9a7ccb0291725847ac8bb479f69b82c997611ed6-merged.mount: Deactivated successfully.
Dec 07 09:43:54 compute-1 podman[86676]: 2025-12-07 09:43:54.986115867 +0000 UTC m=+3.243944500 container remove 0207d5518a90f725563f71a0a206315761b6390cdb1e7bf394e7aaa7899234c7 (image=quay.io/ceph/keepalived:2.2.4, name=gracious_chaum, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, vendor=Red Hat, Inc., name=keepalived, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., release=1793, architecture=x86_64, io.buildah.version=1.28.2, io.openshift.expose-services=, com.redhat.component=keepalived-container, vcs-type=git)
Dec 07 09:43:55 compute-1 systemd[1]: libpod-conmon-0207d5518a90f725563f71a0a206315761b6390cdb1e7bf394e7aaa7899234c7.scope: Deactivated successfully.
Dec 07 09:43:55 compute-1 systemd[1]: Reloading.
Dec 07 09:43:55 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:55 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f800025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:43:55 compute-1 systemd-sysv-generator[86822]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:43:55 compute-1 systemd-rc-local-generator[86818]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:43:55 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e54 e54: 3 total, 3 up, 3 in
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.1e( empty local-lis/les=22/23 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.1c( empty local-lis/les=22/23 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.1a( empty local-lis/les=22/23 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.1d( empty local-lis/les=22/23 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.19( empty local-lis/les=22/23 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.1b( empty local-lis/les=22/23 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.16( empty local-lis/les=22/23 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.f( empty local-lis/les=22/23 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.15( empty local-lis/les=22/23 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.c( empty local-lis/les=22/23 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.a( empty local-lis/les=22/23 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.4( empty local-lis/les=22/23 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.3( empty local-lis/les=22/23 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.e( empty local-lis/les=22/23 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.d( empty local-lis/les=22/23 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.18( empty local-lis/les=22/23 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.2( empty local-lis/les=22/23 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.1( empty local-lis/les=22/23 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.7( empty local-lis/les=22/23 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.5( empty local-lis/les=22/23 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.6( empty local-lis/les=22/23 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.9( empty local-lis/les=22/23 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.8( empty local-lis/les=22/23 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.b( empty local-lis/les=22/23 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.14( empty local-lis/les=22/23 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.11( empty local-lis/les=22/23 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.10( empty local-lis/les=22/23 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.17( empty local-lis/les=22/23 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.13( empty local-lis/les=22/23 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.12( empty local-lis/les=22/23 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.1f( empty local-lis/les=22/23 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.1e( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.19( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.1d( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:55 compute-1 ceph-mon[80077]: 4.1c deep-scrub starts
Dec 07 09:43:55 compute-1 ceph-mon[80077]: 4.1c deep-scrub ok
Dec 07 09:43:55 compute-1 ceph-mon[80077]: 5.18 deep-scrub starts
Dec 07 09:43:55 compute-1 ceph-mon[80077]: 5.18 deep-scrub ok
Dec 07 09:43:55 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Dec 07 09:43:55 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Dec 07 09:43:55 compute-1 ceph-mon[80077]: osdmap e54: 3 total, 3 up, 3 in
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.1c( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.16( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.1a( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.15( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.c( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.4( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.f( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.1b( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.a( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.3( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.e( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.18( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.0( empty local-lis/les=53/54 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.1( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.7( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.5( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.6( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.d( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.9( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.2( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.b( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.14( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.11( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.13( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.10( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.12( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.17( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.1f( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:55 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 54 pg[7.8( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=22/22 les/c/f=23/23/0 sis=53) [1] r=0 lpr=53 pi=[22,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:55 compute-1 systemd[1]: Reloading.
Dec 07 09:43:55 compute-1 systemd-rc-local-generator[86858]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:43:55 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Dec 07 09:43:55 compute-1 systemd-sysv-generator[86865]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:43:55 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Dec 07 09:43:55 compute-1 systemd[1]: Starting Ceph keepalived.nfs.cephfs.compute-1.gawwbe for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c...
Dec 07 09:43:55 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:55 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:43:55 compute-1 podman[86920]: 2025-12-07 09:43:55.936386427 +0000 UTC m=+0.051544276 container create 63d2276d352335c300047c0a6c5b69b30b1c28b930e12681d437c0d1d4a9599f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-nfs-cephfs-compute-1-gawwbe, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, release=1793, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, io.openshift.expose-services=)
Dec 07 09:43:55 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dfa96846fc8b45953170654163bbaf7275a310f8c38f42ff122843e04f74c6e/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Dec 07 09:43:56 compute-1 podman[86920]: 2025-12-07 09:43:55.998111445 +0000 UTC m=+0.113269384 container init 63d2276d352335c300047c0a6c5b69b30b1c28b930e12681d437c0d1d4a9599f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-nfs-cephfs-compute-1-gawwbe, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, architecture=x86_64, name=keepalived)
Dec 07 09:43:56 compute-1 podman[86920]: 2025-12-07 09:43:56.00769756 +0000 UTC m=+0.122855439 container start 63d2276d352335c300047c0a6c5b69b30b1c28b930e12681d437c0d1d4a9599f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-nfs-cephfs-compute-1-gawwbe, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, name=keepalived, version=2.2.4, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph)
Dec 07 09:43:56 compute-1 podman[86920]: 2025-12-07 09:43:55.915152675 +0000 UTC m=+0.030310554 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Dec 07 09:43:56 compute-1 bash[86920]: 63d2276d352335c300047c0a6c5b69b30b1c28b930e12681d437c0d1d4a9599f
Dec 07 09:43:56 compute-1 systemd[1]: Started Ceph keepalived.nfs.cephfs.compute-1.gawwbe for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec 07 09:43:56 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-nfs-cephfs-compute-1-gawwbe[86935]: Sun Dec  7 09:43:56 2025: Starting Keepalived v2.2.4 (08/21,2021)
Dec 07 09:43:56 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-nfs-cephfs-compute-1-gawwbe[86935]: Sun Dec  7 09:43:56 2025: Running on Linux 5.14.0-645.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025 (built for Linux 5.14.0)
Dec 07 09:43:56 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-nfs-cephfs-compute-1-gawwbe[86935]: Sun Dec  7 09:43:56 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Dec 07 09:43:56 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-nfs-cephfs-compute-1-gawwbe[86935]: Sun Dec  7 09:43:56 2025: Configuration file /etc/keepalived/keepalived.conf
Dec 07 09:43:56 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-nfs-cephfs-compute-1-gawwbe[86935]: Sun Dec  7 09:43:56 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Dec 07 09:43:56 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-nfs-cephfs-compute-1-gawwbe[86935]: Sun Dec  7 09:43:56 2025: Starting VRRP child process, pid=4
Dec 07 09:43:56 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-nfs-cephfs-compute-1-gawwbe[86935]: Sun Dec  7 09:43:56 2025: Startup complete
Dec 07 09:43:56 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-nfs-cephfs-compute-1-gawwbe[86935]: Sun Dec  7 09:43:56 2025: (VI_0) Entering BACKUP STATE (init)
Dec 07 09:43:56 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-nfs-cephfs-compute-1-gawwbe[86935]: Sun Dec  7 09:43:56 2025: VRRP_Script(check_backend) succeeded
Dec 07 09:43:56 compute-1 sudo[86610]: pam_unix(sudo:session): session closed for user root
Dec 07 09:43:56 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e55 e55: 3 total, 3 up, 3 in
Dec 07 09:43:56 compute-1 ceph-mon[80077]: 4.16 scrub starts
Dec 07 09:43:56 compute-1 ceph-mon[80077]: 4.16 scrub ok
Dec 07 09:43:56 compute-1 ceph-mon[80077]: 5.16 deep-scrub starts
Dec 07 09:43:56 compute-1 ceph-mon[80077]: 5.16 deep-scrub ok
Dec 07 09:43:56 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Dec 07 09:43:56 compute-1 ceph-mon[80077]: pgmap v55: 182 pgs: 108 unknown, 74 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec 07 09:43:56 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 07 09:43:56 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 07 09:43:56 compute-1 ceph-mon[80077]: 7.1e scrub starts
Dec 07 09:43:56 compute-1 ceph-mon[80077]: 7.1e scrub ok
Dec 07 09:43:56 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Dec 07 09:43:56 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Dec 07 09:43:56 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:56 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f74002f50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:43:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:57 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f64002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:43:57 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 07 09:43:57 compute-1 ceph-mon[80077]: 4.17 scrub starts
Dec 07 09:43:57 compute-1 ceph-mon[80077]: 4.17 scrub ok
Dec 07 09:43:57 compute-1 ceph-mon[80077]: 5.9 scrub starts
Dec 07 09:43:57 compute-1 ceph-mon[80077]: 5.9 scrub ok
Dec 07 09:43:57 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:57 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Dec 07 09:43:57 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Dec 07 09:43:57 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Dec 07 09:43:57 compute-1 ceph-mon[80077]: osdmap e55: 3 total, 3 up, 3 in
Dec 07 09:43:57 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Dec 07 09:43:57 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:57 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:43:57 compute-1 ceph-mon[80077]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 07 09:43:57 compute-1 ceph-mon[80077]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec 07 09:43:57 compute-1 ceph-mon[80077]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 07 09:43:57 compute-1 ceph-mon[80077]: Deploying daemon keepalived.nfs.cephfs.compute-0.vqhjze on compute-0
Dec 07 09:43:57 compute-1 ceph-mon[80077]: 4.1a scrub starts
Dec 07 09:43:57 compute-1 ceph-mon[80077]: 4.1a scrub ok
Dec 07 09:43:57 compute-1 ceph-mon[80077]: 7.19 scrub starts
Dec 07 09:43:57 compute-1 ceph-mon[80077]: 7.19 scrub ok
Dec 07 09:43:57 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e56 e56: 3 total, 3 up, 3 in
Dec 07 09:43:57 compute-1 ceph-mon[80077]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0.
Dec 07 09:43:57 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:43:57.420188) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 07 09:43:57 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13
Dec 07 09:43:57 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100637420453, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 6437, "num_deletes": 254, "total_data_size": 18662298, "memory_usage": 19467552, "flush_reason": "Manual Compaction"}
Dec 07 09:43:57 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started
Dec 07 09:43:57 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Dec 07 09:43:57 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Dec 07 09:43:57 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100637584287, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 11935735, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6, "largest_seqno": 6442, "table_properties": {"data_size": 11911244, "index_size": 15545, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7877, "raw_key_size": 75465, "raw_average_key_size": 24, "raw_value_size": 11850623, "raw_average_value_size": 3783, "num_data_blocks": 690, "num_entries": 3132, "num_filter_entries": 3132, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100474, "oldest_key_time": 1765100474, "file_creation_time": 1765100637, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "19b7cd17-892b-4642-8771-311739802c4a", "db_session_id": "JE48258Z8V09XG5T5TD7", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}}
Dec 07 09:43:57 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 164220 microseconds, and 52737 cpu microseconds.
Dec 07 09:43:57 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:43:57.584432) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 11935735 bytes OK
Dec 07 09:43:57 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:43:57.584470) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started
Dec 07 09:43:57 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:43:57.590694) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done
Dec 07 09:43:57 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:43:57.590754) EVENT_LOG_v1 {"time_micros": 1765100637590739, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Dec 07 09:43:57 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:43:57.590793) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50
Dec 07 09:43:57 compute-1 ceph-mon[80077]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 18627713, prev total WAL file size 18627713, number of live WAL files 2.
Dec 07 09:43:57 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 09:43:57 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:43:57.597984) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323531' seq:0, type:0; will stop at (end)
Dec 07 09:43:57 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00
Dec 07 09:43:57 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(11MB) 8(1648B)]
Dec 07 09:43:57 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100637598161, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 11937383, "oldest_snapshot_seqno": -1}
Dec 07 09:43:57 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 2882 keys, 11932349 bytes, temperature: kUnknown
Dec 07 09:43:57 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100637702462, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 11932349, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11908439, "index_size": 15609, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7237, "raw_key_size": 72054, "raw_average_key_size": 25, "raw_value_size": 11850903, "raw_average_value_size": 4112, "num_data_blocks": 691, "num_entries": 2882, "num_filter_entries": 2882, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100473, "oldest_key_time": 0, "file_creation_time": 1765100637, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "19b7cd17-892b-4642-8771-311739802c4a", "db_session_id": "JE48258Z8V09XG5T5TD7", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}}
Dec 07 09:43:57 compute-1 ceph-mon[80077]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 07 09:43:57 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:43:57.702858) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 11932349 bytes
Dec 07 09:43:57 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:43:57.704385) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 114.3 rd, 114.3 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(11.4, 0.0 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3137, records dropped: 255 output_compression: NoCompression
Dec 07 09:43:57 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:43:57.704406) EVENT_LOG_v1 {"time_micros": 1765100637704395, "job": 4, "event": "compaction_finished", "compaction_time_micros": 104410, "compaction_time_cpu_micros": 54733, "output_level": 6, "num_output_files": 1, "total_output_size": 11932349, "num_input_records": 3137, "num_output_records": 2882, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 07 09:43:57 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 09:43:57 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100637707169, "job": 4, "event": "table_file_deletion", "file_number": 14}
Dec 07 09:43:57 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 09:43:57 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100637707238, "job": 4, "event": "table_file_deletion", "file_number": 8}
Dec 07 09:43:57 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:43:57.597802) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 09:43:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:57 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f800025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:43:58 compute-1 ceph-mon[80077]: 5.10 scrub starts
Dec 07 09:43:58 compute-1 ceph-mon[80077]: 5.10 scrub ok
Dec 07 09:43:58 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Dec 07 09:43:58 compute-1 ceph-mon[80077]: osdmap e56: 3 total, 3 up, 3 in
Dec 07 09:43:58 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec 07 09:43:58 compute-1 ceph-mon[80077]: 4.13 scrub starts
Dec 07 09:43:58 compute-1 ceph-mon[80077]: 4.13 scrub ok
Dec 07 09:43:58 compute-1 ceph-mon[80077]: pgmap v58: 244 pgs: 62 unknown, 182 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 07 09:43:58 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 07 09:43:58 compute-1 ceph-mon[80077]: 7.1d scrub starts
Dec 07 09:43:58 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 07 09:43:58 compute-1 ceph-mon[80077]: 7.1d scrub ok
Dec 07 09:43:58 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e57 e57: 3 total, 3 up, 3 in
Dec 07 09:43:58 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 57 pg[10.0( v 56'1095 (0'0,56'1095] local-lis/les=42/43 n=178 ec=42/42 lis/c=42/42 les/c/f=43/43/0 sis=57 pruub=11.609794617s) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 lcod 56'1094 mlcod 56'1094 active pruub 205.894042969s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:43:58 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 57 pg[10.0( v 56'1095 lc 0'0 (0'0,56'1095] local-lis/les=42/43 n=5 ec=42/42 lis/c=42/42 les/c/f=43/43/0 sis=57 pruub=11.609794617s) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 lcod 56'1094 mlcod 0'0 unknown pruub 205.894042969s@ mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:58 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x5613f906c900) operator()   moving buffer(0x5613f9db3c48 space 0x5613f9b576d0 0x0~1000 clean)
Dec 07 09:43:58 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x5613f906c900) operator()   moving buffer(0x5613f9d88208 space 0x5613f9df09d0 0x0~1000 clean)
Dec 07 09:43:58 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x5613f906c900) operator()   moving buffer(0x5613f9db8168 space 0x5613f9c0fae0 0x0~1000 clean)
Dec 07 09:43:58 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x5613f906c900) operator()   moving buffer(0x5613f9d98528 space 0x5613f9df0760 0x0~1000 clean)
Dec 07 09:43:58 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x5613f906c900) operator()   moving buffer(0x5613f9db8c08 space 0x5613f9df0aa0 0x0~1000 clean)
Dec 07 09:43:58 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x5613f906c900) operator()   moving buffer(0x5613f9d98ca8 space 0x5613f9cf7870 0x0~1000 clean)
Dec 07 09:43:58 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x5613f906c900) operator()   moving buffer(0x5613f9d83e28 space 0x5613f9c0f600 0x0~1000 clean)
Dec 07 09:43:58 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x5613f906c900) operator()   moving buffer(0x5613f9da0668 space 0x5613f9df0f80 0x0~1000 clean)
Dec 07 09:43:58 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x5613f906c900) operator()   moving buffer(0x5613f9db85c8 space 0x5613f9d8d460 0x0~1000 clean)
Dec 07 09:43:58 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x5613f906c900) operator()   moving buffer(0x5613f9db9c48 space 0x5613f9d8dd50 0x0~1000 clean)
Dec 07 09:43:58 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x5613f906c900) operator()   moving buffer(0x5613f9d82208 space 0x5613f9df12c0 0x0~1000 clean)
Dec 07 09:43:58 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x5613f906c900) operator()   moving buffer(0x5613f9d83d88 space 0x5613f9e37a10 0x0~1000 clean)
Dec 07 09:43:58 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x5613f906c900) operator()   moving buffer(0x5613f9da1e28 space 0x5613f9df1390 0x0~1000 clean)
Dec 07 09:43:58 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x5613f906c900) operator()   moving buffer(0x5613f9db9108 space 0x5613f9df0b70 0x0~1000 clean)
Dec 07 09:43:58 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x5613f906c900) operator()   moving buffer(0x5613f97c27a8 space 0x5613f9dc8b70 0x0~1000 clean)
Dec 07 09:43:58 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x5613f906c900) operator()   moving buffer(0x5613f9db9428 space 0x5613f9d85ae0 0x0~1000 clean)
Dec 07 09:43:58 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x5613f906c900) operator()   moving buffer(0x5613f9d98848 space 0x5613f98f2690 0x0~1000 clean)
Dec 07 09:43:58 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x5613f906c900) operator()   moving buffer(0x5613f9d88de8 space 0x5613f9df0d10 0x0~1000 clean)
Dec 07 09:43:58 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x5613f906c900) operator()   moving buffer(0x5613f9db8b68 space 0x5613f9df0de0 0x0~1000 clean)
Dec 07 09:43:58 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x5613f906c900) operator()   moving buffer(0x5613f9d99f68 space 0x5613f9df05c0 0x0~1000 clean)
Dec 07 09:43:58 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x5613f906c900) operator()   moving buffer(0x5613f9d83748 space 0x5613f9c1b2c0 0x0~1000 clean)
Dec 07 09:43:58 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x5613f906c900) operator()   moving buffer(0x5613f9db8668 space 0x5613f9c03390 0x0~1000 clean)
Dec 07 09:43:58 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x5613f906c900) operator()   moving buffer(0x5613f9d89568 space 0x5613f9b3e830 0x0~1000 clean)
Dec 07 09:43:58 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x5613f906c900) operator()   moving buffer(0x5613f9b634c8 space 0x5613f9df0830 0x0~1000 clean)
Dec 07 09:43:58 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x5613f906c900) operator()   moving buffer(0x5613f9db2488 space 0x5613f9b3d2c0 0x0~1000 clean)
Dec 07 09:43:58 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x5613f906c900) operator()   moving buffer(0x5613f9da1ec8 space 0x5613f9df1530 0x0~1000 clean)
Dec 07 09:43:58 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x5613f906c900) operator()   moving buffer(0x5613f9da0028 space 0x5613f9df0eb0 0x0~1000 clean)
Dec 07 09:43:58 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x5613f906c900) operator()   moving buffer(0x5613f89f5ec8 space 0x5613f9df0690 0x0~1000 clean)
Dec 07 09:43:58 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x5613f906c900) operator()   moving buffer(0x5613f9d891a8 space 0x5613f9df0c40 0x0~1000 clean)
Dec 07 09:43:58 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x5613f906c900) operator()   moving buffer(0x5613f9d99568 space 0x5613f9df04f0 0x0~1000 clean)
Dec 07 09:43:58 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x5613f906c900) operator()   moving buffer(0x5613f9da0988 space 0x5613f9df1460 0x0~1000 clean)
Dec 07 09:43:58 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Dec 07 09:43:58 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Dec 07 09:43:58 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:58 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f800025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:43:59 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:59 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f88000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:43:59 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e58 e58: 3 total, 3 up, 3 in
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.12( v 56'1095 lc 0'0 (0'0,56'1095] local-lis/les=42/43 n=6 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.1e( v 56'1095 lc 0'0 (0'0,56'1095] local-lis/les=42/43 n=5 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.1f( v 56'1095 lc 0'0 (0'0,56'1095] local-lis/les=42/43 n=5 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.1d( v 56'1095 lc 0'0 (0'0,56'1095] local-lis/les=42/43 n=5 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.1c( v 56'1095 lc 0'0 (0'0,56'1095] local-lis/les=42/43 n=5 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.1a( v 56'1095 lc 0'0 (0'0,56'1095] local-lis/les=42/43 n=5 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.19( v 56'1095 lc 0'0 (0'0,56'1095] local-lis/les=42/43 n=5 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.6( v 56'1095 lc 0'0 (0'0,56'1095] local-lis/les=42/43 n=6 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.5( v 56'1095 lc 0'0 (0'0,56'1095] local-lis/les=42/43 n=6 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.4( v 56'1095 lc 0'0 (0'0,56'1095] local-lis/les=42/43 n=6 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.8( v 56'1095 lc 0'0 (0'0,56'1095] local-lis/les=42/43 n=6 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.b( v 56'1095 lc 0'0 (0'0,56'1095] local-lis/les=42/43 n=6 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.a( v 56'1095 lc 0'0 (0'0,56'1095] local-lis/les=42/43 n=6 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.c( v 56'1095 lc 0'0 (0'0,56'1095] local-lis/les=42/43 n=6 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.f( v 56'1095 lc 0'0 (0'0,56'1095] local-lis/les=42/43 n=6 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.d( v 56'1095 lc 0'0 (0'0,56'1095] local-lis/les=42/43 n=6 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.3( v 56'1095 lc 0'0 (0'0,56'1095] local-lis/les=42/43 n=6 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.9( v 56'1095 lc 0'0 (0'0,56'1095] local-lis/les=42/43 n=6 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.e( v 56'1095 lc 0'0 (0'0,56'1095] local-lis/les=42/43 n=6 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.15( v 56'1095 lc 0'0 (0'0,56'1095] local-lis/les=42/43 n=5 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.7( v 56'1095 lc 0'0 (0'0,56'1095] local-lis/les=42/43 n=6 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.1( v 56'1095 lc 0'0 (0'0,56'1095] local-lis/les=42/43 n=6 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.2( v 56'1095 lc 0'0 (0'0,56'1095] local-lis/les=42/43 n=6 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.18( v 56'1095 lc 0'0 (0'0,56'1095] local-lis/les=42/43 n=5 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.1b( v 56'1095 lc 0'0 (0'0,56'1095] local-lis/les=42/43 n=5 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.14( v 56'1095 lc 0'0 (0'0,56'1095] local-lis/les=42/43 n=5 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.17( v 56'1095 lc 0'0 (0'0,56'1095] local-lis/les=42/43 n=5 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.16( v 56'1095 lc 0'0 (0'0,56'1095] local-lis/les=42/43 n=5 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.10( v 56'1095 lc 0'0 (0'0,56'1095] local-lis/les=42/43 n=6 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.13( v 56'1095 lc 0'0 (0'0,56'1095] local-lis/les=42/43 n=5 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.11( v 56'1095 lc 0'0 (0'0,56'1095] local-lis/les=42/43 n=6 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.1d( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.1f( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:59 compute-1 ceph-mon[80077]: 5.7 scrub starts
Dec 07 09:43:59 compute-1 ceph-mon[80077]: 5.7 scrub ok
Dec 07 09:43:59 compute-1 ceph-mon[80077]: 4.1d scrub starts
Dec 07 09:43:59 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Dec 07 09:43:59 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Dec 07 09:43:59 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Dec 07 09:43:59 compute-1 ceph-mon[80077]: osdmap e57: 3 total, 3 up, 3 in
Dec 07 09:43:59 compute-1 ceph-mon[80077]: 4.1d scrub ok
Dec 07 09:43:59 compute-1 ceph-mon[80077]: 7.16 scrub starts
Dec 07 09:43:59 compute-1 ceph-mon[80077]: 7.16 scrub ok
Dec 07 09:43:59 compute-1 ceph-mon[80077]: 5.1e scrub starts
Dec 07 09:43:59 compute-1 ceph-mon[80077]: 5.1e scrub ok
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.1e( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.1c( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.1a( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.19( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.12( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.6( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.5( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.4( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.8( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.a( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.b( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.f( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.0( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=42/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 lcod 56'1094 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.c( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.3( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.9( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.15( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.2( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.1b( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.1( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.7( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.16( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.d( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.10( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.18( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.17( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.e( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.11( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.13( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:59 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 58 pg[10.14( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=42/42 les/c/f=43/43/0 sis=57) [1] r=0 lpr=57 pi=[42,57)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:43:59 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Dec 07 09:43:59 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Dec 07 09:43:59 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-nfs-cephfs-compute-1-gawwbe[86935]: Sun Dec  7 09:43:59 2025: (VI_0) Entering MASTER STATE
Dec 07 09:43:59 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-nfs-cephfs-compute-1-gawwbe[86935]: Sun Dec  7 09:43:59 2025: (VI_0) Master received advert from 192.168.122.102 with same priority 90 but higher IP address than ours
Dec 07 09:43:59 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-nfs-cephfs-compute-1-gawwbe[86935]: Sun Dec  7 09:43:59 2025: (VI_0) Entering BACKUP STATE
Dec 07 09:43:59 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:43:59 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f74002f50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:00 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e59 e59: 3 total, 3 up, 3 in
Dec 07 09:44:00 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 59 pg[12.0( v 58'1 (0'0,58'1] local-lis/les=46/47 n=1 ec=46/46 lis/c=46/46 les/c/f=47/47/0 sis=59 pruub=13.996068001s) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 lcod 0'0 mlcod 0'0 active pruub 210.320663452s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:00 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 59 pg[12.0( v 58'1 lc 0'0 (0'0,58'1] local-lis/les=46/47 n=0 ec=46/46 lis/c=46/46 les/c/f=47/47/0 sis=59 pruub=13.996068001s) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 mlcod 0'0 unknown pruub 210.320663452s@ mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:00 compute-1 ceph-mon[80077]: osdmap e58: 3 total, 3 up, 3 in
Dec 07 09:44:00 compute-1 ceph-mon[80077]: 4.11 scrub starts
Dec 07 09:44:00 compute-1 ceph-mon[80077]: 4.11 scrub ok
Dec 07 09:44:00 compute-1 ceph-mon[80077]: 7.1a scrub starts
Dec 07 09:44:00 compute-1 ceph-mon[80077]: pgmap v61: 306 pgs: 124 unknown, 182 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 07 09:44:00 compute-1 ceph-mon[80077]: 7.1a scrub ok
Dec 07 09:44:00 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 07 09:44:00 compute-1 ceph-mon[80077]: 5.15 deep-scrub starts
Dec 07 09:44:00 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Dec 07 09:44:00 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Dec 07 09:44:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:00 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f64003820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:01 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:01 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f800025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:01 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 7.1c deep-scrub starts
Dec 07 09:44:01 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 7.1c deep-scrub ok
Dec 07 09:44:01 compute-1 sshd-session[86947]: error: kex_exchange_identification: read: Connection reset by peer
Dec 07 09:44:01 compute-1 sshd-session[86947]: Connection reset by 45.140.17.97 port 1301
Dec 07 09:44:01 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:01 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f88000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:02 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 7.c scrub starts
Dec 07 09:44:02 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:02 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f74002f50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:02 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 7.c scrub ok
Dec 07 09:44:02 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e59 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 07 09:44:02 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e60 e60: 3 total, 3 up, 3 in
Dec 07 09:44:03 compute-1 ceph-mon[80077]: 5.15 deep-scrub ok
Dec 07 09:44:03 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec 07 09:44:03 compute-1 ceph-mon[80077]: 4.14 scrub starts
Dec 07 09:44:03 compute-1 ceph-mon[80077]: osdmap e59: 3 total, 3 up, 3 in
Dec 07 09:44:03 compute-1 ceph-mon[80077]: 4.14 scrub ok
Dec 07 09:44:03 compute-1 ceph-mon[80077]: 7.15 scrub starts
Dec 07 09:44:03 compute-1 ceph-mon[80077]: 7.15 scrub ok
Dec 07 09:44:03 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:03 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:03 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f64003820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.14( v 58'1 lc 0'0 (0'0,58'1] local-lis/les=46/47 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.19( v 58'1 lc 0'0 (0'0,58'1] local-lis/les=46/47 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.18( v 58'1 lc 0'0 (0'0,58'1] local-lis/les=46/47 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.1b( v 58'1 lc 0'0 (0'0,58'1] local-lis/les=46/47 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.1a( v 58'1 lc 0'0 (0'0,58'1] local-lis/les=46/47 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.1c( v 58'1 lc 0'0 (0'0,58'1] local-lis/les=46/47 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.1f( v 58'1 lc 0'0 (0'0,58'1] local-lis/les=46/47 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.3( v 58'1 lc 0'0 (0'0,58'1] local-lis/les=46/47 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.2( v 58'1 lc 0'0 (0'0,58'1] local-lis/les=46/47 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.d( v 58'1 lc 0'0 (0'0,58'1] local-lis/les=46/47 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.e( v 58'1 lc 0'0 (0'0,58'1] local-lis/les=46/47 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.c( v 58'1 lc 0'0 (0'0,58'1] local-lis/les=46/47 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.a( v 58'1 lc 0'0 (0'0,58'1] local-lis/les=46/47 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.9( v 58'1 lc 0'0 (0'0,58'1] local-lis/les=46/47 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.b( v 58'1 lc 0'0 (0'0,58'1] local-lis/les=46/47 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.6( v 58'1 lc 0'0 (0'0,58'1] local-lis/les=46/47 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.5( v 58'1 lc 0'0 (0'0,58'1] local-lis/les=46/47 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.13( v 58'1 lc 0'0 (0'0,58'1] local-lis/les=46/47 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.f( v 58'1 lc 0'0 (0'0,58'1] local-lis/les=46/47 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.1( v 58'1 (0'0,58'1] local-lis/les=46/47 n=1 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.7( v 58'1 lc 0'0 (0'0,58'1] local-lis/les=46/47 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.8( v 58'1 lc 0'0 (0'0,58'1] local-lis/les=46/47 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.4( v 58'1 lc 0'0 (0'0,58'1] local-lis/les=46/47 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.1e( v 58'1 lc 0'0 (0'0,58'1] local-lis/les=46/47 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.1d( v 58'1 lc 0'0 (0'0,58'1] local-lis/les=46/47 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.12( v 58'1 lc 0'0 (0'0,58'1] local-lis/les=46/47 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.11( v 58'1 lc 0'0 (0'0,58'1] local-lis/les=46/47 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.10( v 58'1 lc 0'0 (0'0,58'1] local-lis/les=46/47 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.16( v 58'1 lc 0'0 (0'0,58'1] local-lis/les=46/47 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.15( v 58'1 lc 0'0 (0'0,58'1] local-lis/les=46/47 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.17( v 58'1 lc 0'0 (0'0,58'1] local-lis/les=46/47 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.14( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.1b( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.1c( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.19( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.18( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.1f( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.1a( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.0( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=46/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.2( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.d( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.e( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.3( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.a( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.b( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.9( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.5( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.13( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.f( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.1( v 58'1 (0'0,58'1] local-lis/les=59/60 n=1 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.c( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.4( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.7( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.6( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.12( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.1d( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.1e( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.11( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.8( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.10( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.16( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.17( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:03 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 60 pg[12.15( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=46/46 les/c/f=47/47/0 sis=59) [1] r=0 lpr=59 pi=[46,59)/1 crt=58'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:03 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Dec 07 09:44:03 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Dec 07 09:44:03 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:03 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f800025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:04 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 7.f scrub starts
Dec 07 09:44:04 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 7.f scrub ok
Dec 07 09:44:04 compute-1 ceph-mon[80077]: 5.1f scrub starts
Dec 07 09:44:04 compute-1 ceph-mon[80077]: 5.1f scrub ok
Dec 07 09:44:04 compute-1 ceph-mon[80077]: 4.12 scrub starts
Dec 07 09:44:04 compute-1 ceph-mon[80077]: 4.12 scrub ok
Dec 07 09:44:04 compute-1 ceph-mon[80077]: pgmap v63: 337 pgs: 1 peering, 31 unknown, 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 4.8 KiB/s rd, 246 B/s wr, 5 op/s
Dec 07 09:44:04 compute-1 ceph-mon[80077]: 7.1c deep-scrub starts
Dec 07 09:44:04 compute-1 ceph-mon[80077]: 7.1c deep-scrub ok
Dec 07 09:44:04 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:04 compute-1 ceph-mon[80077]: 5.11 deep-scrub starts
Dec 07 09:44:04 compute-1 ceph-mon[80077]: 5.11 deep-scrub ok
Dec 07 09:44:04 compute-1 ceph-mon[80077]: 4.a scrub starts
Dec 07 09:44:04 compute-1 ceph-mon[80077]: 7.c scrub starts
Dec 07 09:44:04 compute-1 ceph-mon[80077]: 4.a scrub ok
Dec 07 09:44:04 compute-1 ceph-mon[80077]: 7.c scrub ok
Dec 07 09:44:04 compute-1 ceph-mon[80077]: osdmap e60: 3 total, 3 up, 3 in
Dec 07 09:44:04 compute-1 ceph-mon[80077]: 5.13 scrub starts
Dec 07 09:44:04 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:04 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:04 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:04 compute-1 ceph-mon[80077]: Deploying daemon alertmanager.compute-0 on compute-0
Dec 07 09:44:04 compute-1 ceph-mon[80077]: pgmap v65: 337 pgs: 1 peering, 31 unknown, 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 3.9 KiB/s rd, 199 B/s wr, 4 op/s
Dec 07 09:44:04 compute-1 ceph-mon[80077]: 7.4 scrub starts
Dec 07 09:44:04 compute-1 ceph-mon[80077]: 7.4 scrub ok
Dec 07 09:44:04 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:04 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f88000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:05 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:05 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f74002f50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:05 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Dec 07 09:44:05 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Dec 07 09:44:05 compute-1 ceph-mon[80077]: 5.13 scrub ok
Dec 07 09:44:05 compute-1 ceph-mon[80077]: 4.4 scrub starts
Dec 07 09:44:05 compute-1 ceph-mon[80077]: 4.4 scrub ok
Dec 07 09:44:05 compute-1 ceph-mon[80077]: 5.8 deep-scrub starts
Dec 07 09:44:05 compute-1 ceph-mon[80077]: 5.8 deep-scrub ok
Dec 07 09:44:05 compute-1 ceph-mon[80077]: 4.e scrub starts
Dec 07 09:44:05 compute-1 ceph-mon[80077]: 4.e scrub ok
Dec 07 09:44:05 compute-1 ceph-mon[80077]: 7.f scrub starts
Dec 07 09:44:05 compute-1 ceph-mon[80077]: 7.f scrub ok
Dec 07 09:44:05 compute-1 ceph-mon[80077]: 5.2 scrub starts
Dec 07 09:44:05 compute-1 ceph-mon[80077]: 5.2 scrub ok
Dec 07 09:44:05 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:05 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f64003820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:06 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 7.a scrub starts
Dec 07 09:44:06 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 7.a scrub ok
Dec 07 09:44:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:06 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f800025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:07 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f880091b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:07 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Dec 07 09:44:07 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Dec 07 09:44:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:07 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f74002f50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:07 compute-1 ceph-mon[80077]: 4.19 scrub starts
Dec 07 09:44:07 compute-1 ceph-mon[80077]: 4.19 scrub ok
Dec 07 09:44:07 compute-1 ceph-mon[80077]: 7.1b scrub starts
Dec 07 09:44:07 compute-1 ceph-mon[80077]: 7.1b scrub ok
Dec 07 09:44:07 compute-1 ceph-mon[80077]: pgmap v66: 337 pgs: 1 peering, 31 unknown, 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 167 B/s wr, 3 op/s
Dec 07 09:44:07 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:07 compute-1 ceph-mon[80077]: 5.17 scrub starts
Dec 07 09:44:07 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e61 e61: 3 total, 3 up, 3 in
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[10.1f( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.424855232s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active pruub 219.305984497s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[10.1f( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.424810410s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=56'1095 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 219.305984497s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[7.13( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.272777557s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active pruub 215.154022217s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[7.1f( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.272816658s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active pruub 215.154052734s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[7.13( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.272763252s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 215.154022217s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[7.1f( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.272769928s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 215.154052734s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[12.18( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.168182373s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=58'1 lcod 0'0 mlcod 0'0 active pruub 215.049530029s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[12.18( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.168170929s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=58'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 215.049530029s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[10.1d( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.424571037s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active pruub 219.305969238s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[10.1d( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.424558640s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=56'1095 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 219.305969238s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[7.11( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.272494316s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active pruub 215.154022217s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[7.11( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.272482872s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 215.154022217s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[7.10( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.272463799s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active pruub 215.154037476s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[12.1a( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.168078423s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=58'1 lcod 0'0 mlcod 0'0 active pruub 215.049652100s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[7.10( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.272452354s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 215.154037476s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[12.1c( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.167867661s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=58'1 lcod 0'0 mlcod 0'0 active pruub 215.049484253s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[12.1a( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.168050766s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=58'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 215.049652100s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[12.1c( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.167856216s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=58'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 215.049484253s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[10.19( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.429414749s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active pruub 219.311233521s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[7.14( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.272137642s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active pruub 215.153961182s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[10.19( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.429403305s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=56'1095 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 219.311233521s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[7.b( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.271945953s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active pruub 215.153884888s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[7.b( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.271937370s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 215.153884888s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[12.3( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.168243408s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=58'1 lcod 0'0 mlcod 0'0 active pruub 215.050186157s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[7.14( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.272126198s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 215.153961182s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[7.8( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.272061348s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active pruub 215.154098511s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[7.8( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.272051811s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 215.154098511s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[10.5( v 59'1098 (0'0,59'1098] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.429200172s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=56'1095 lcod 59'1097 mlcod 59'1097 active pruub 219.311264038s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[12.3( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.168233871s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=58'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 215.050186157s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[10.5( v 59'1098 (0'0,59'1098] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.429150581s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=56'1095 lcod 59'1097 mlcod 0'0 unknown NOTIFY pruub 219.311264038s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[7.9( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.271615028s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active pruub 215.153793335s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[7.9( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.271602631s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 215.153793335s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[7.6( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.271440506s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active pruub 215.153732300s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[7.6( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.271430969s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 215.153732300s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[12.2( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.167786598s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=58'1 lcod 0'0 mlcod 0'0 active pruub 215.050109863s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[12.19( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.167152405s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=58'1 lcod 0'0 mlcod 0'0 active pruub 215.049499512s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[12.2( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.167775154s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=58'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 215.050109863s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[12.e( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.167770386s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=58'1 lcod 0'0 mlcod 0'0 active pruub 215.050186157s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[12.e( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.167758942s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=58'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 215.050186157s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[10.b( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.429192543s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active pruub 219.311416626s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[10.b( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.428895950s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=56'1095 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 219.311416626s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[7.5( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.271095276s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active pruub 215.153717041s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[7.5( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.271083832s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 215.153717041s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[12.a( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.167542458s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=58'1 lcod 0'0 mlcod 0'0 active pruub 215.050247192s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[12.a( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.167530060s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=58'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 215.050247192s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[12.c( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.167438507s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=58'1 lcod 0'0 mlcod 0'0 active pruub 215.050186157s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[12.b( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.167472839s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=58'1 lcod 0'0 mlcod 0'0 active pruub 215.050277710s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[12.b( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.167461395s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=58'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 215.050277710s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[12.19( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.166961670s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=58'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 215.049499512s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[10.d( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.428953171s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active pruub 219.311813354s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[12.c( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.167368889s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=58'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 215.050186157s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[12.9( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.167361259s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=58'1 lcod 0'0 mlcod 0'0 active pruub 215.050308228s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[12.9( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.167349815s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=58'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 215.050308228s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[10.f( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.428465843s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active pruub 219.311523438s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[10.f( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.428446770s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=56'1095 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 219.311523438s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[10.d( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.428940773s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=56'1095 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 219.311813354s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[7.2( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.270612717s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active pruub 215.153793335s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[12.6( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.167095184s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=58'1 lcod 0'0 mlcod 0'0 active pruub 215.050308228s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[12.6( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.167075157s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=58'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 215.050308228s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[7.2( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.270509720s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 215.153793335s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[12.13( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.166854858s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=58'1 lcod 0'0 mlcod 0'0 active pruub 215.050323486s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[10.15( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.428374290s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active pruub 219.311889648s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[10.15( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.428359985s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=56'1095 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 219.311889648s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[10.3( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.428133011s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active pruub 219.311706543s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[12.13( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.166837692s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=58'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 215.050323486s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[7.e( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.269895554s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active pruub 215.153533936s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[7.e( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.269867897s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 215.153533936s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[12.8( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.167074203s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=58'1 lcod 0'0 mlcod 0'0 active pruub 215.050781250s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[12.8( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.167051315s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=58'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 215.050781250s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[10.3( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.427974701s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=56'1095 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 219.311706543s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[10.9( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.427883148s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active pruub 219.311798096s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[10.9( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.427867889s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=56'1095 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 219.311798096s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[7.4( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.269410133s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active pruub 215.153396606s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[7.4( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.269384384s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 215.153396606s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[10.7( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.427926064s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active pruub 219.311981201s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[10.7( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.427913666s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=56'1095 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 219.311981201s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[7.3( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.269462585s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active pruub 215.153518677s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[7.18( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.269330978s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active pruub 215.153533936s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[7.3( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.269436836s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 215.153518677s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[7.18( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.269279480s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 215.153533936s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[10.1( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.427552223s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active pruub 219.311950684s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[12.7( v 60'2 (0'0,60'2] local-lis/les=59/60 n=1 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.165957451s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=60'2 lcod 58'1 mlcod 58'1 active pruub 215.050354004s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[12.4( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.165974617s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=58'1 lcod 0'0 mlcod 0'0 active pruub 215.050384521s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[12.4( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.165961266s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=58'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 215.050384521s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[10.1( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.427537918s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=56'1095 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 219.311950684s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[7.f( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.268745422s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active pruub 215.153411865s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[12.7( v 60'2 (0'0,60'2] local-lis/les=59/60 n=1 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.165921211s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=60'2 lcod 58'1 mlcod 0'0 unknown NOTIFY pruub 215.050354004s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[7.f( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.268720627s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 215.153411865s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[12.1e( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.166083336s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=58'1 lcod 0'0 mlcod 0'0 active pruub 215.050827026s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[12.1e( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.166068077s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=58'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 215.050827026s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[7.a( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.268543243s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active pruub 215.153396606s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[7.a( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.268524170s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 215.153396606s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[7.16( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.268096924s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active pruub 215.152984619s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[7.16( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.268083572s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 215.152984619s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[12.12( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.165744781s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=58'1 lcod 0'0 mlcod 0'0 active pruub 215.050735474s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[10.17( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.427183151s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active pruub 219.312210083s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[12.1d( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.165381432s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=58'1 lcod 0'0 mlcod 0'0 active pruub 215.050415039s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[10.17( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.427172661s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=56'1095 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 219.312210083s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[12.1d( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.165364265s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=58'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 215.050415039s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[12.12( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.165715218s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=58'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 215.050735474s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[12.10( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.165682793s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=58'1 lcod 0'0 mlcod 0'0 active pruub 215.050872803s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[12.11( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.165638924s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=58'1 lcod 0'0 mlcod 0'0 active pruub 215.050842285s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[12.10( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.165670395s) [0] r=-1 lpr=61 pi=[59,61)/1 crt=58'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 215.050872803s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[12.11( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.165624619s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=58'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 215.050842285s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[10.1b( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.426906586s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active pruub 219.311965942s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[7.1b( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.268079758s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active pruub 215.153396606s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[10.1b( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.426696777s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=56'1095 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 219.311965942s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[7.1b( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.268066406s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 215.153396606s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[12.17( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.165520668s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=58'1 lcod 0'0 mlcod 0'0 active pruub 215.050918579s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[7.1d( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.260132790s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active pruub 215.145614624s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[7.1d( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.260119438s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 215.145614624s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[12.17( v 58'1 (0'0,58'1] local-lis/les=59/60 n=0 ec=59/46 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=11.165494919s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=58'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 215.050918579s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[10.13( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.426539421s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active pruub 219.312072754s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[10.13( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.426523209s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=56'1095 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 219.312072754s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[7.1e( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.260010719s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active pruub 215.145584106s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[7.1e( empty local-lis/les=53/54 n=0 ec=53/22 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.259999275s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 215.145584106s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[10.11( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.426543236s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active pruub 219.312286377s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[10.11( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=15.426505089s) [2] r=-1 lpr=61 pi=[57,61)/1 crt=56'1095 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 219.312286377s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[11.12( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[9.10( empty local-lis/les=0/0 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[8.10( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[4.1b( empty local-lis/les=0/0 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[8.17( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[9.11( empty local-lis/les=0/0 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[8.1b( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[11.1b( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[8.18( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[5.10( empty local-lis/les=0/0 n=0 ec=51/20 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[11.1c( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[4.13( empty local-lis/les=0/0 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[5.18( empty local-lis/les=0/0 n=0 ec=51/20 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[11.1d( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[5.9( empty local-lis/les=0/0 n=0 ec=51/20 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[11.1e( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[11.1( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[5.16( empty local-lis/les=0/0 n=0 ec=51/20 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[4.e( empty local-lis/les=0/0 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[5.7( empty local-lis/les=0/0 n=0 ec=51/20 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[11.4( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[5.1c( empty local-lis/les=0/0 n=0 ec=51/20 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[9.6( empty local-lis/les=0/0 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[5.11( empty local-lis/les=0/0 n=0 ec=51/20 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[11.5( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[5.15( empty local-lis/les=0/0 n=0 ec=51/20 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[4.1a( empty local-lis/les=0/0 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[5.2( empty local-lis/les=0/0 n=0 ec=51/20 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[4.a( empty local-lis/les=0/0 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[5.f( empty local-lis/les=0/0 n=0 ec=51/20 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[8.8( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[5.1f( empty local-lis/les=0/0 n=0 ec=51/20 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[11.14( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[5.1b( empty local-lis/les=0/0 n=0 ec=51/20 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[4.d( empty local-lis/les=0/0 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[5.1( empty local-lis/les=0/0 n=0 ec=51/20 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[4.18( empty local-lis/les=0/0 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[9.15( empty local-lis/les=0/0 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[8.14( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[9.e( empty local-lis/les=0/0 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[4.5( empty local-lis/les=0/0 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[9.f( empty local-lis/les=0/0 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[11.f( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[9.d( empty local-lis/les=0/0 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[9.a( empty local-lis/les=0/0 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[4.c( empty local-lis/les=0/0 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[11.7( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[8.4( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[11.1a( empty local-lis/les=0/0 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[8.19( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[9.12( empty local-lis/les=0/0 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 61 pg[8.12( empty local-lis/les=0/0 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:07 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e61 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 07 09:44:08 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Dec 07 09:44:08 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Dec 07 09:44:08 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:08 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f64003820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:08 compute-1 ceph-mon[80077]: 5.17 scrub ok
Dec 07 09:44:08 compute-1 ceph-mon[80077]: 4.9 scrub starts
Dec 07 09:44:08 compute-1 ceph-mon[80077]: 4.9 scrub ok
Dec 07 09:44:08 compute-1 ceph-mon[80077]: 7.a scrub starts
Dec 07 09:44:08 compute-1 ceph-mon[80077]: 7.a scrub ok
Dec 07 09:44:08 compute-1 ceph-mon[80077]: 5.c scrub starts
Dec 07 09:44:08 compute-1 ceph-mon[80077]: 5.c scrub ok
Dec 07 09:44:08 compute-1 ceph-mon[80077]: 4.18 scrub starts
Dec 07 09:44:08 compute-1 ceph-mon[80077]: 4.18 scrub ok
Dec 07 09:44:08 compute-1 ceph-mon[80077]: 7.3 scrub starts
Dec 07 09:44:08 compute-1 ceph-mon[80077]: 7.3 scrub ok
Dec 07 09:44:08 compute-1 ceph-mon[80077]: pgmap v67: 337 pgs: 337 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 255 B/s wr, 3 op/s
Dec 07 09:44:08 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 07 09:44:08 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 07 09:44:08 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 07 09:44:08 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 07 09:44:08 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Dec 07 09:44:08 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 07 09:44:08 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Dec 07 09:44:08 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 07 09:44:08 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 07 09:44:08 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 07 09:44:08 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 07 09:44:08 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 07 09:44:08 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 07 09:44:08 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Dec 07 09:44:08 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 07 09:44:08 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Dec 07 09:44:08 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 07 09:44:08 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 07 09:44:08 compute-1 ceph-mon[80077]: osdmap e61: 3 total, 3 up, 3 in
Dec 07 09:44:08 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:08 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:08 compute-1 ceph-mon[80077]: 10.12 scrub starts
Dec 07 09:44:08 compute-1 ceph-mon[80077]: 10.12 scrub ok
Dec 07 09:44:08 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:08 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e62 e62: 3 total, 3 up, 3 in
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[10.13( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=62) [2]/[1] r=0 lpr=62 pi=[57,62)/1 crt=56'1095 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[10.13( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=62) [2]/[1] r=0 lpr=62 pi=[57,62)/1 crt=56'1095 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[10.11( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=62) [2]/[1] r=0 lpr=62 pi=[57,62)/1 crt=56'1095 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[10.11( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=62) [2]/[1] r=0 lpr=62 pi=[57,62)/1 crt=56'1095 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[10.17( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=62) [2]/[1] r=0 lpr=62 pi=[57,62)/1 crt=56'1095 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[10.1b( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=62) [2]/[1] r=0 lpr=62 pi=[57,62)/1 crt=56'1095 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[10.1b( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=62) [2]/[1] r=0 lpr=62 pi=[57,62)/1 crt=56'1095 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[10.17( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=62) [2]/[1] r=0 lpr=62 pi=[57,62)/1 crt=56'1095 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[10.1( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=62) [2]/[1] r=0 lpr=62 pi=[57,62)/1 crt=56'1095 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[10.7( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=62) [2]/[1] r=0 lpr=62 pi=[57,62)/1 crt=56'1095 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[10.1( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=62) [2]/[1] r=0 lpr=62 pi=[57,62)/1 crt=56'1095 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[10.19( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=62) [2]/[1] r=0 lpr=62 pi=[57,62)/1 crt=56'1095 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[10.7( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=62) [2]/[1] r=0 lpr=62 pi=[57,62)/1 crt=56'1095 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[10.19( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=62) [2]/[1] r=0 lpr=62 pi=[57,62)/1 crt=56'1095 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[10.15( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=62) [2]/[1] r=0 lpr=62 pi=[57,62)/1 crt=56'1095 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[10.15( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=62) [2]/[1] r=0 lpr=62 pi=[57,62)/1 crt=56'1095 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[10.3( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=62) [2]/[1] r=0 lpr=62 pi=[57,62)/1 crt=56'1095 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[10.3( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=62) [2]/[1] r=0 lpr=62 pi=[57,62)/1 crt=56'1095 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[10.f( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=62) [2]/[1] r=0 lpr=62 pi=[57,62)/1 crt=56'1095 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[10.f( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=62) [2]/[1] r=0 lpr=62 pi=[57,62)/1 crt=56'1095 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[10.9( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=62) [2]/[1] r=0 lpr=62 pi=[57,62)/1 crt=56'1095 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[10.9( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=62) [2]/[1] r=0 lpr=62 pi=[57,62)/1 crt=56'1095 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[10.d( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=62) [2]/[1] r=0 lpr=62 pi=[57,62)/1 crt=56'1095 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[10.d( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=62) [2]/[1] r=0 lpr=62 pi=[57,62)/1 crt=56'1095 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[10.b( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=62) [2]/[1] r=0 lpr=62 pi=[57,62)/1 crt=56'1095 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[10.b( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=62) [2]/[1] r=0 lpr=62 pi=[57,62)/1 crt=56'1095 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[10.5( v 59'1098 (0'0,59'1098] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=62) [2]/[1] r=0 lpr=62 pi=[57,62)/1 crt=56'1095 lcod 59'1097 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[10.5( v 59'1098 (0'0,59'1098] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=62) [2]/[1] r=0 lpr=62 pi=[57,62)/1 crt=56'1095 lcod 59'1097 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[10.1d( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=62) [2]/[1] r=0 lpr=62 pi=[57,62)/1 crt=56'1095 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[10.1d( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=62) [2]/[1] r=0 lpr=62 pi=[57,62)/1 crt=56'1095 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[10.1f( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=62) [2]/[1] r=0 lpr=62 pi=[57,62)/1 crt=56'1095 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[10.1f( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=62) [2]/[1] r=0 lpr=62 pi=[57,62)/1 crt=56'1095 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[8.10( v 59'48 lc 48'14 (0'0,59'48] local-lis/les=61/62 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=59'48 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[11.1d( v 56'72 (0'0,56'72] local-lis/les=61/62 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=56'72 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[11.1b( v 56'72 (0'0,56'72] local-lis/les=61/62 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=56'72 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[8.18( v 48'45 lc 48'19 (0'0,48'45] local-lis/les=61/62 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=48'45 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[11.1c( v 56'72 (0'0,56'72] local-lis/les=61/62 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=56'72 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[8.1b( v 48'45 lc 48'8 (0'0,48'45] local-lis/les=61/62 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=48'45 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[11.7( v 56'72 (0'0,56'72] local-lis/les=61/62 n=1 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=56'72 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[11.1e( v 56'72 (0'0,56'72] local-lis/les=61/62 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=56'72 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[8.4( v 48'45 (0'0,48'45] local-lis/les=61/62 n=1 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[9.6( v 41'9 lc 41'5 (0'0,41'9] local-lis/les=61/62 n=1 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=41'9 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[11.4( v 56'72 (0'0,56'72] local-lis/les=61/62 n=1 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=56'72 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[11.5( v 56'72 (0'0,56'72] local-lis/les=61/62 n=1 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=56'72 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[8.8( v 48'45 (0'0,48'45] local-lis/les=61/62 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[9.11( v 41'9 (0'0,41'9] local-lis/les=61/62 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=41'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[4.5( empty local-lis/les=61/62 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[9.e( v 41'9 (0'0,41'9] local-lis/les=61/62 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=41'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[11.1( v 56'72 (0'0,56'72] local-lis/les=61/62 n=1 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=56'72 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[9.f( v 41'9 lc 0'0 (0'0,41'9] local-lis/les=61/62 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=41'9 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[8.17( v 48'45 (0'0,48'45] local-lis/les=61/62 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[4.1b( empty local-lis/les=61/62 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[11.14( v 60'78 lc 59'77 (0'0,60'78] local-lis/les=61/62 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=60'78 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[9.d( v 41'9 (0'0,41'9] local-lis/les=61/62 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=41'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[4.a( empty local-lis/les=61/62 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[11.f( v 56'72 (0'0,56'72] local-lis/les=61/62 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=56'72 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[4.e( empty local-lis/les=61/62 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[9.a( v 41'9 (0'0,41'9] local-lis/les=61/62 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=41'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[8.19( v 48'45 (0'0,48'45] local-lis/les=61/62 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[4.c( empty local-lis/les=61/62 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[4.1a( empty local-lis/les=61/62 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[4.18( empty local-lis/les=61/62 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[8.14( v 48'45 (0'0,48'45] local-lis/les=61/62 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[9.15( v 41'9 (0'0,41'9] local-lis/les=61/62 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=41'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[8.12( v 48'45 (0'0,48'45] local-lis/les=61/62 n=0 ec=55/37 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=48'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[9.12( v 41'9 (0'0,41'9] local-lis/les=61/62 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=41'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[9.10( v 41'9 (0'0,41'9] local-lis/les=61/62 n=0 ec=55/40 lis/c=55/55 les/c/f=56/56/0 sis=61) [1] r=0 lpr=61 pi=[55,61)/1 crt=41'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[11.12( v 56'72 (0'0,56'72] local-lis/les=61/62 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=56'72 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[5.1c( empty local-lis/les=61/62 n=0 ec=51/20 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[5.1f( empty local-lis/les=61/62 n=0 ec=51/20 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[4.d( empty local-lis/les=61/62 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[11.1a( v 56'72 (0'0,56'72] local-lis/les=61/62 n=0 ec=57/44 lis/c=57/57 les/c/f=58/58/0 sis=61) [1] r=0 lpr=61 pi=[57,61)/1 crt=56'72 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[5.1b( empty local-lis/les=61/62 n=0 ec=51/20 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[5.16( empty local-lis/les=61/62 n=0 ec=51/20 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[5.18( empty local-lis/les=61/62 n=0 ec=51/20 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[5.11( empty local-lis/les=61/62 n=0 ec=51/20 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[5.10( empty local-lis/les=61/62 n=0 ec=51/20 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[4.13( empty local-lis/les=61/62 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[5.1( empty local-lis/les=61/62 n=0 ec=51/20 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[5.2( empty local-lis/les=61/62 n=0 ec=51/20 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[5.f( empty local-lis/les=61/62 n=0 ec=51/20 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[5.7( empty local-lis/les=61/62 n=0 ec=51/20 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[5.9( empty local-lis/les=61/62 n=0 ec=51/20 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:08 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 62 pg[5.15( empty local-lis/les=61/62 n=0 ec=51/20 lis/c=51/51 les/c/f=52/52/0 sis=61) [1] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:09 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:09 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f58000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:09 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:09 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f880091b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:09 compute-1 ceph-mon[80077]: 5.1a scrub starts
Dec 07 09:44:09 compute-1 ceph-mon[80077]: 5.1a scrub ok
Dec 07 09:44:09 compute-1 ceph-mon[80077]: 11.15 scrub starts
Dec 07 09:44:09 compute-1 ceph-mon[80077]: 11.15 scrub ok
Dec 07 09:44:09 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:09 compute-1 ceph-mon[80077]: Regenerating cephadm self-signed grafana TLS certificates
Dec 07 09:44:09 compute-1 ceph-mon[80077]: osdmap e62: 3 total, 3 up, 3 in
Dec 07 09:44:09 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:09 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:09 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Dec 07 09:44:09 compute-1 ceph-mon[80077]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Dec 07 09:44:09 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:09 compute-1 ceph-mon[80077]: Deploying daemon grafana.compute-0 on compute-0
Dec 07 09:44:09 compute-1 ceph-mon[80077]: pgmap v70: 337 pgs: 337 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 475 B/s rd, 158 B/s wr, 0 op/s
Dec 07 09:44:09 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Dec 07 09:44:09 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Dec 07 09:44:09 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e63 e63: 3 total, 3 up, 3 in
Dec 07 09:44:09 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 63 pg[6.a( empty local-lis/les=0/0 n=0 ec=53/21 lis/c=53/53 les/c/f=54/54/0 sis=63) [1] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:09 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 63 pg[6.6( empty local-lis/les=0/0 n=0 ec=53/21 lis/c=53/53 les/c/f=54/54/0 sis=63) [1] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:09 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 63 pg[6.2( empty local-lis/les=0/0 n=0 ec=53/21 lis/c=53/53 les/c/f=54/54/0 sis=63) [1] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:09 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 63 pg[6.e( empty local-lis/les=0/0 n=0 ec=53/21 lis/c=53/53 les/c/f=54/54/0 sis=63) [1] r=0 lpr=63 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:09 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 63 pg[10.16( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.410688400s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active pruub 219.312057495s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:09 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 63 pg[10.16( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.410663605s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=56'1095 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 219.312057495s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:09 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 63 pg[10.2( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.409760475s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active pruub 219.311889648s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:09 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 63 pg[10.2( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.409698486s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=56'1095 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 219.311889648s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:09 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 63 pg[10.e( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.409857750s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active pruub 219.312271118s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:09 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 63 pg[10.e( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.409834862s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=56'1095 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 219.312271118s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:09 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 63 pg[10.a( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.407942772s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active pruub 219.311401367s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:09 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 63 pg[10.a( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.407908440s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=56'1095 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 219.311401367s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:09 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 63 pg[10.6( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.406905174s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active pruub 219.311248779s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:09 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 63 pg[10.6( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.406809807s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=56'1095 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 219.311248779s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:09 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 63 pg[10.1a( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.406424522s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active pruub 219.311004639s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:09 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 63 pg[10.1a( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.406399727s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=56'1095 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 219.311004639s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:09 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 63 pg[10.1e( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.405790329s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active pruub 219.310867310s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:09 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 63 pg[10.12( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=4 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.406029701s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active pruub 219.311187744s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:09 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 63 pg[10.1e( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.405747414s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=56'1095 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 219.310867310s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:09 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 63 pg[10.12( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=4 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.405997276s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=56'1095 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 219.311187744s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:09 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 63 pg[10.11( v 56'1095 (0'0,56'1095] local-lis/les=62/63 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[57,62)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:09 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 63 pg[10.1b( v 56'1095 (0'0,56'1095] local-lis/les=62/63 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[57,62)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:09 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 63 pg[10.17( v 56'1095 (0'0,56'1095] local-lis/les=62/63 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[57,62)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:09 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 63 pg[10.7( v 56'1095 (0'0,56'1095] local-lis/les=62/63 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[57,62)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:09 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 63 pg[10.13( v 56'1095 (0'0,56'1095] local-lis/les=62/63 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[57,62)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:09 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 63 pg[10.15( v 56'1095 (0'0,56'1095] local-lis/les=62/63 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[57,62)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:09 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 63 pg[10.3( v 56'1095 (0'0,56'1095] local-lis/les=62/63 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[57,62)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:09 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 63 pg[10.9( v 56'1095 (0'0,56'1095] local-lis/les=62/63 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[57,62)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:09 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 63 pg[10.d( v 56'1095 (0'0,56'1095] local-lis/les=62/63 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[57,62)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:09 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 63 pg[10.19( v 56'1095 (0'0,56'1095] local-lis/les=62/63 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[57,62)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:09 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 63 pg[10.1f( v 56'1095 (0'0,56'1095] local-lis/les=62/63 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[57,62)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:09 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 63 pg[10.f( v 56'1095 (0'0,56'1095] local-lis/les=62/63 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[57,62)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:09 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 63 pg[10.5( v 59'1098 (0'0,59'1098] local-lis/les=62/63 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[57,62)/1 crt=59'1098 lcod 59'1097 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:09 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 63 pg[10.1d( v 56'1095 (0'0,56'1095] local-lis/les=62/63 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[57,62)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:09 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 63 pg[10.b( v 56'1095 (0'0,56'1095] local-lis/les=62/63 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[57,62)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:09 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 63 pg[10.1( v 56'1095 (0'0,56'1095] local-lis/les=62/63 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[57,62)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:10 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:10 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f74002f50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:10 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e64 e64: 3 total, 3 up, 3 in
Dec 07 09:44:10 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 64 pg[10.13( v 56'1095 (0'0,56'1095] local-lis/les=62/63 n=5 ec=57/42 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.004767418s) [2] async=[2] r=-1 lpr=64 pi=[57,64)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active pruub 221.916732788s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:10 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 64 pg[10.13( v 56'1095 (0'0,56'1095] local-lis/les=62/63 n=5 ec=57/42 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.004690170s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=56'1095 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 221.916732788s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:10 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 64 pg[10.16( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=64) [0]/[1] r=0 lpr=64 pi=[57,64)/1 crt=56'1095 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:10 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 64 pg[10.11( v 56'1095 (0'0,56'1095] local-lis/les=62/63 n=6 ec=57/42 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=14.995632172s) [2] async=[2] r=-1 lpr=64 pi=[57,64)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active pruub 221.907852173s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:10 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 64 pg[10.16( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=64) [0]/[1] r=0 lpr=64 pi=[57,64)/1 crt=56'1095 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:10 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 64 pg[10.11( v 56'1095 (0'0,56'1095] local-lis/les=62/63 n=6 ec=57/42 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=14.995583534s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=56'1095 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 221.907852173s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:10 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 64 pg[10.17( v 56'1095 (0'0,56'1095] local-lis/les=62/63 n=5 ec=57/42 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.004186630s) [2] async=[2] r=-1 lpr=64 pi=[57,64)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active pruub 221.916641235s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:10 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 64 pg[10.17( v 56'1095 (0'0,56'1095] local-lis/les=62/63 n=5 ec=57/42 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.004137039s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=56'1095 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 221.916641235s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:10 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 64 pg[10.1b( v 56'1095 (0'0,56'1095] local-lis/les=62/63 n=5 ec=57/42 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.004071236s) [2] async=[2] r=-1 lpr=64 pi=[57,64)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active pruub 221.916641235s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:10 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 64 pg[10.1b( v 56'1095 (0'0,56'1095] local-lis/les=62/63 n=5 ec=57/42 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.004043579s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=56'1095 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 221.916641235s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:10 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 64 pg[10.2( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=64) [0]/[1] r=0 lpr=64 pi=[57,64)/1 crt=56'1095 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:10 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 64 pg[10.7( v 56'1095 (0'0,56'1095] local-lis/les=62/63 n=6 ec=57/42 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.003871918s) [2] async=[2] r=-1 lpr=64 pi=[57,64)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active pruub 221.916671753s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:10 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 64 pg[10.2( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=64) [0]/[1] r=0 lpr=64 pi=[57,64)/1 crt=56'1095 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:10 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 64 pg[10.7( v 56'1095 (0'0,56'1095] local-lis/les=62/63 n=6 ec=57/42 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.003833771s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=56'1095 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 221.916671753s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:10 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 64 pg[10.9( v 56'1095 (0'0,56'1095] local-lis/les=62/63 n=6 ec=57/42 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.003837585s) [2] async=[2] r=-1 lpr=64 pi=[57,64)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active pruub 221.916839600s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:10 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 64 pg[10.9( v 56'1095 (0'0,56'1095] local-lis/les=62/63 n=6 ec=57/42 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.003770828s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=56'1095 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 221.916839600s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:10 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 64 pg[10.15( v 56'1095 (0'0,56'1095] local-lis/les=62/63 n=5 ec=57/42 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.003657341s) [2] async=[2] r=-1 lpr=64 pi=[57,64)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active pruub 221.916839600s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:10 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 64 pg[10.15( v 56'1095 (0'0,56'1095] local-lis/les=62/63 n=5 ec=57/42 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.003608704s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=56'1095 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 221.916839600s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:10 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 64 pg[10.e( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=64) [0]/[1] r=0 lpr=64 pi=[57,64)/1 crt=56'1095 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:10 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 64 pg[10.3( v 56'1095 (0'0,56'1095] local-lis/les=62/63 n=6 ec=57/42 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.003323555s) [2] async=[2] r=-1 lpr=64 pi=[57,64)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active pruub 221.916854858s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:10 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 64 pg[10.a( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=64) [0]/[1] r=0 lpr=64 pi=[57,64)/1 crt=56'1095 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:10 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 64 pg[10.a( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=64) [0]/[1] r=0 lpr=64 pi=[57,64)/1 crt=56'1095 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:10 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 64 pg[10.e( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=64) [0]/[1] r=0 lpr=64 pi=[57,64)/1 crt=56'1095 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:10 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 64 pg[10.d( v 56'1095 (0'0,56'1095] local-lis/les=62/63 n=6 ec=57/42 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.003014565s) [2] async=[2] r=-1 lpr=64 pi=[57,64)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active pruub 221.916976929s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:10 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 64 pg[10.3( v 56'1095 (0'0,56'1095] local-lis/les=62/63 n=6 ec=57/42 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.003209114s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=56'1095 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 221.916854858s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:10 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 64 pg[10.d( v 56'1095 (0'0,56'1095] local-lis/les=62/63 n=6 ec=57/42 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.002962112s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=56'1095 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 221.916976929s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:10 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 64 pg[10.6( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=64) [0]/[1] r=0 lpr=64 pi=[57,64)/1 crt=56'1095 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:10 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 64 pg[10.6( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=64) [0]/[1] r=0 lpr=64 pi=[57,64)/1 crt=56'1095 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:10 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 64 pg[10.19( v 56'1095 (0'0,56'1095] local-lis/les=62/63 n=5 ec=57/42 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.002843857s) [2] async=[2] r=-1 lpr=64 pi=[57,64)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active pruub 221.917251587s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:10 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 64 pg[10.19( v 56'1095 (0'0,56'1095] local-lis/les=62/63 n=5 ec=57/42 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.002807617s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=56'1095 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 221.917251587s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:10 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 64 pg[10.1a( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=64) [0]/[1] r=0 lpr=64 pi=[57,64)/1 crt=56'1095 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:10 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 64 pg[10.1a( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=64) [0]/[1] r=0 lpr=64 pi=[57,64)/1 crt=56'1095 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:10 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 64 pg[10.1e( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=64) [0]/[1] r=0 lpr=64 pi=[57,64)/1 crt=56'1095 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:10 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 64 pg[10.1e( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=64) [0]/[1] r=0 lpr=64 pi=[57,64)/1 crt=56'1095 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:10 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 64 pg[10.1f( v 56'1095 (0'0,56'1095] local-lis/les=62/63 n=5 ec=57/42 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.002192497s) [2] async=[2] r=-1 lpr=64 pi=[57,64)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active pruub 221.917312622s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:10 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 64 pg[10.1f( v 56'1095 (0'0,56'1095] local-lis/les=62/63 n=5 ec=57/42 lis/c=62/57 les/c/f=63/58/0 sis=64 pruub=15.002129555s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=56'1095 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 221.917312622s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:10 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 64 pg[10.12( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=4 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=64) [0]/[1] r=0 lpr=64 pi=[57,64)/1 crt=56'1095 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:10 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 64 pg[10.12( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=4 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=64) [0]/[1] r=0 lpr=64 pi=[57,64)/1 crt=56'1095 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:10 compute-1 ceph-mon[80077]: 11.18 scrub starts
Dec 07 09:44:10 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Dec 07 09:44:10 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Dec 07 09:44:10 compute-1 ceph-mon[80077]: osdmap e63: 3 total, 3 up, 3 in
Dec 07 09:44:10 compute-1 ceph-mon[80077]: 11.18 scrub ok
Dec 07 09:44:10 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:10 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 64 pg[6.e( v 46'39 lc 45'19 (0'0,46'39] local-lis/les=63/64 n=1 ec=53/21 lis/c=53/53 les/c/f=54/54/0 sis=63) [1] r=0 lpr=63 pi=[53,63)/1 crt=46'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:10 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 64 pg[6.2( v 46'39 (0'0,46'39] local-lis/les=63/64 n=2 ec=53/21 lis/c=53/53 les/c/f=54/54/0 sis=63) [1] r=0 lpr=63 pi=[53,63)/1 crt=46'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:10 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 64 pg[6.a( v 46'39 (0'0,46'39] local-lis/les=63/64 n=1 ec=53/21 lis/c=53/53 les/c/f=54/54/0 sis=63) [1] r=0 lpr=63 pi=[53,63)/1 crt=46'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:10 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 64 pg[6.6( v 46'39 lc 0'0 (0'0,46'39] local-lis/les=63/64 n=2 ec=53/21 lis/c=53/53 les/c/f=54/54/0 sis=63) [1] r=0 lpr=63 pi=[53,63)/1 crt=46'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:11 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:11 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f64003820 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:11 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Dec 07 09:44:11 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Dec 07 09:44:11 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:11 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f580016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:11 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e65 e65: 3 total, 3 up, 3 in
Dec 07 09:44:11 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 65 pg[10.f( v 56'1095 (0'0,56'1095] local-lis/les=62/63 n=6 ec=57/42 lis/c=62/57 les/c/f=63/58/0 sis=65 pruub=13.994749069s) [2] async=[2] r=-1 lpr=65 pi=[57,65)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active pruub 221.917343140s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:11 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 65 pg[10.f( v 56'1095 (0'0,56'1095] local-lis/les=62/63 n=6 ec=57/42 lis/c=62/57 les/c/f=63/58/0 sis=65 pruub=13.994654655s) [2] r=-1 lpr=65 pi=[57,65)/1 crt=56'1095 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 221.917343140s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:11 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 65 pg[10.1( v 56'1095 (0'0,56'1095] local-lis/les=62/63 n=6 ec=57/42 lis/c=62/57 les/c/f=63/58/0 sis=65 pruub=13.994986534s) [2] async=[2] r=-1 lpr=65 pi=[57,65)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active pruub 221.917572021s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:11 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 65 pg[10.1( v 56'1095 (0'0,56'1095] local-lis/les=62/63 n=6 ec=57/42 lis/c=62/57 les/c/f=63/58/0 sis=65 pruub=13.994482994s) [2] r=-1 lpr=65 pi=[57,65)/1 crt=56'1095 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 221.917572021s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:11 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 65 pg[10.b( v 56'1095 (0'0,56'1095] local-lis/les=62/63 n=6 ec=57/42 lis/c=62/57 les/c/f=63/58/0 sis=65 pruub=13.993988037s) [2] async=[2] r=-1 lpr=65 pi=[57,65)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active pruub 221.917541504s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:11 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 65 pg[10.b( v 56'1095 (0'0,56'1095] local-lis/les=62/63 n=6 ec=57/42 lis/c=62/57 les/c/f=63/58/0 sis=65 pruub=13.993908882s) [2] r=-1 lpr=65 pi=[57,65)/1 crt=56'1095 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 221.917541504s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:11 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 65 pg[10.5( v 63'1101 (0'0,63'1101] local-lis/les=62/63 n=6 ec=57/42 lis/c=62/57 les/c/f=63/58/0 sis=65 pruub=13.993291855s) [2] async=[2] r=-1 lpr=65 pi=[57,65)/1 crt=59'1098 lcod 63'1100 mlcod 63'1100 active pruub 221.917404175s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:11 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 65 pg[10.5( v 63'1101 (0'0,63'1101] local-lis/les=62/63 n=6 ec=57/42 lis/c=62/57 les/c/f=63/58/0 sis=65 pruub=13.993162155s) [2] r=-1 lpr=65 pi=[57,65)/1 crt=59'1098 lcod 63'1100 mlcod 0'0 unknown NOTIFY pruub 221.917404175s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:11 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 65 pg[10.1d( v 56'1095 (0'0,56'1095] local-lis/les=62/63 n=5 ec=57/42 lis/c=62/57 les/c/f=63/58/0 sis=65 pruub=13.992555618s) [2] async=[2] r=-1 lpr=65 pi=[57,65)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active pruub 221.917480469s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:11 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 65 pg[10.1d( v 56'1095 (0'0,56'1095] local-lis/les=62/63 n=5 ec=57/42 lis/c=62/57 les/c/f=63/58/0 sis=65 pruub=13.992448807s) [2] r=-1 lpr=65 pi=[57,65)/1 crt=56'1095 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 221.917480469s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:11 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 65 pg[10.12( v 56'1095 (0'0,56'1095] local-lis/les=64/65 n=4 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[57,64)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:11 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 65 pg[10.e( v 56'1095 (0'0,56'1095] local-lis/les=64/65 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[57,64)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:11 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 65 pg[10.1a( v 56'1095 (0'0,56'1095] local-lis/les=64/65 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[57,64)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:11 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 65 pg[10.16( v 56'1095 (0'0,56'1095] local-lis/les=64/65 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[57,64)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:11 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 65 pg[10.2( v 56'1095 (0'0,56'1095] local-lis/les=64/65 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[57,64)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:11 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 65 pg[10.1e( v 56'1095 (0'0,56'1095] local-lis/les=64/65 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[57,64)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:11 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 65 pg[10.6( v 56'1095 (0'0,56'1095] local-lis/les=64/65 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[57,64)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:11 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 65 pg[10.a( v 56'1095 (0'0,56'1095] local-lis/les=64/65 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[57,64)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:11 compute-1 ceph-mon[80077]: 9.1a scrub starts
Dec 07 09:44:11 compute-1 ceph-mon[80077]: 9.1a scrub ok
Dec 07 09:44:11 compute-1 ceph-mon[80077]: osdmap e64: 3 total, 3 up, 3 in
Dec 07 09:44:11 compute-1 ceph-mon[80077]: 6.2 scrub starts
Dec 07 09:44:11 compute-1 ceph-mon[80077]: 6.2 scrub ok
Dec 07 09:44:11 compute-1 ceph-mon[80077]: pgmap v73: 337 pgs: 1 active+recovery_wait, 1 active+recovering+remapped, 8 unknown, 4 active+recovery_wait+remapped, 1 active+recovery_wait+degraded, 11 active+remapped, 4 peering, 1 active+recovering, 306 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1/179 objects degraded (0.559%); 32/179 objects misplaced (17.877%); 857 B/s, 2 keys/s, 20 objects/s recovering
Dec 07 09:44:11 compute-1 ceph-mon[80077]: osdmap e65: 3 total, 3 up, 3 in
Dec 07 09:44:12 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:12 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f88009ec0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:12 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 6.a scrub starts
Dec 07 09:44:12 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 6.a scrub ok
Dec 07 09:44:12 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e66 e66: 3 total, 3 up, 3 in
Dec 07 09:44:12 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 66 pg[10.e( v 56'1095 (0'0,56'1095] local-lis/les=64/65 n=6 ec=57/42 lis/c=64/57 les/c/f=65/58/0 sis=66 pruub=15.004883766s) [0] async=[0] r=-1 lpr=66 pi=[57,66)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active pruub 223.935638428s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:12 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 66 pg[10.e( v 56'1095 (0'0,56'1095] local-lis/les=64/65 n=6 ec=57/42 lis/c=64/57 les/c/f=65/58/0 sis=66 pruub=15.004828453s) [0] r=-1 lpr=66 pi=[57,66)/1 crt=56'1095 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.935638428s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:12 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 66 pg[10.1a( v 56'1095 (0'0,56'1095] local-lis/les=64/65 n=5 ec=57/42 lis/c=64/57 les/c/f=65/58/0 sis=66 pruub=15.004806519s) [0] async=[0] r=-1 lpr=66 pi=[57,66)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active pruub 223.935668945s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:12 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 66 pg[10.2( v 56'1095 (0'0,56'1095] local-lis/les=64/65 n=6 ec=57/42 lis/c=64/57 les/c/f=65/58/0 sis=66 pruub=15.004806519s) [0] async=[0] r=-1 lpr=66 pi=[57,66)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active pruub 223.935699463s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:12 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 66 pg[10.6( v 56'1095 (0'0,56'1095] local-lis/les=64/65 n=6 ec=57/42 lis/c=64/57 les/c/f=65/58/0 sis=66 pruub=15.005069733s) [0] async=[0] r=-1 lpr=66 pi=[57,66)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active pruub 223.935974121s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:12 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 66 pg[10.1a( v 56'1095 (0'0,56'1095] local-lis/les=64/65 n=5 ec=57/42 lis/c=64/57 les/c/f=65/58/0 sis=66 pruub=15.004727364s) [0] r=-1 lpr=66 pi=[57,66)/1 crt=56'1095 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.935668945s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:12 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 66 pg[10.2( v 56'1095 (0'0,56'1095] local-lis/les=64/65 n=6 ec=57/42 lis/c=64/57 les/c/f=65/58/0 sis=66 pruub=15.004705429s) [0] r=-1 lpr=66 pi=[57,66)/1 crt=56'1095 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.935699463s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:12 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 66 pg[10.6( v 56'1095 (0'0,56'1095] local-lis/les=64/65 n=6 ec=57/42 lis/c=64/57 les/c/f=65/58/0 sis=66 pruub=15.004963875s) [0] r=-1 lpr=66 pi=[57,66)/1 crt=56'1095 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.935974121s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:12 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 66 pg[10.16( v 56'1095 (0'0,56'1095] local-lis/les=64/65 n=5 ec=57/42 lis/c=64/57 les/c/f=65/58/0 sis=66 pruub=15.004359245s) [0] async=[0] r=-1 lpr=66 pi=[57,66)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active pruub 223.935714722s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:12 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 66 pg[10.16( v 56'1095 (0'0,56'1095] local-lis/les=64/65 n=5 ec=57/42 lis/c=64/57 les/c/f=65/58/0 sis=66 pruub=15.004276276s) [0] r=-1 lpr=66 pi=[57,66)/1 crt=56'1095 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.935714722s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:12 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 66 pg[10.12( v 56'1095 (0'0,56'1095] local-lis/les=64/65 n=4 ec=57/42 lis/c=64/57 les/c/f=65/58/0 sis=66 pruub=14.998037338s) [0] async=[0] r=-1 lpr=66 pi=[57,66)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active pruub 223.929641724s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:12 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 66 pg[10.12( v 56'1095 (0'0,56'1095] local-lis/les=64/65 n=4 ec=57/42 lis/c=64/57 les/c/f=65/58/0 sis=66 pruub=14.997914314s) [0] r=-1 lpr=66 pi=[57,66)/1 crt=56'1095 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.929641724s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:12 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 66 pg[10.a( v 56'1095 (0'0,56'1095] local-lis/les=64/65 n=6 ec=57/42 lis/c=64/57 les/c/f=65/58/0 sis=66 pruub=15.003547668s) [0] async=[0] r=-1 lpr=66 pi=[57,66)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active pruub 223.936172485s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:12 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 66 pg[10.a( v 56'1095 (0'0,56'1095] local-lis/les=64/65 n=6 ec=57/42 lis/c=64/57 les/c/f=65/58/0 sis=66 pruub=15.003436089s) [0] r=-1 lpr=66 pi=[57,66)/1 crt=56'1095 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.936172485s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:12 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 66 pg[10.1e( v 56'1095 (0'0,56'1095] local-lis/les=64/65 n=5 ec=57/42 lis/c=64/57 les/c/f=65/58/0 sis=66 pruub=15.002261162s) [0] async=[0] r=-1 lpr=66 pi=[57,66)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active pruub 223.935791016s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:12 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 66 pg[10.1e( v 56'1095 (0'0,56'1095] local-lis/les=64/65 n=5 ec=57/42 lis/c=64/57 les/c/f=65/58/0 sis=66 pruub=15.002200127s) [0] r=-1 lpr=66 pi=[57,66)/1 crt=56'1095 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.935791016s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:12 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 07 09:44:12 compute-1 ceph-mon[80077]: 6.d scrub starts
Dec 07 09:44:12 compute-1 ceph-mon[80077]: 6.d scrub ok
Dec 07 09:44:12 compute-1 ceph-mon[80077]: 9.1b scrub starts
Dec 07 09:44:12 compute-1 ceph-mon[80077]: 9.1b scrub ok
Dec 07 09:44:12 compute-1 ceph-mon[80077]: Health check failed: Degraded data redundancy: 1/179 objects degraded (0.559%), 1 pg degraded (PG_DEGRADED)
Dec 07 09:44:12 compute-1 ceph-mon[80077]: osdmap e66: 3 total, 3 up, 3 in
Dec 07 09:44:13 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:13 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f74002f50 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:13 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 6.e scrub starts
Dec 07 09:44:13 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 6.e scrub ok
Dec 07 09:44:13 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:13 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f64004140 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:13 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e67 e67: 3 total, 3 up, 3 in
Dec 07 09:44:14 compute-1 ceph-mon[80077]: 6.7 scrub starts
Dec 07 09:44:14 compute-1 ceph-mon[80077]: 6.7 scrub ok
Dec 07 09:44:14 compute-1 ceph-mon[80077]: 8.1a scrub starts
Dec 07 09:44:14 compute-1 ceph-mon[80077]: 8.1a scrub ok
Dec 07 09:44:14 compute-1 ceph-mon[80077]: 6.a scrub starts
Dec 07 09:44:14 compute-1 ceph-mon[80077]: 6.a scrub ok
Dec 07 09:44:14 compute-1 ceph-mon[80077]: 6.3 scrub starts
Dec 07 09:44:14 compute-1 ceph-mon[80077]: pgmap v76: 337 pgs: 1 active+recovery_wait, 1 active+recovering+remapped, 8 unknown, 4 active+recovery_wait+remapped, 1 active+recovery_wait+degraded, 11 active+remapped, 4 peering, 1 active+recovering, 306 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1/179 objects degraded (0.559%); 32/179 objects misplaced (17.877%); 857 B/s, 2 keys/s, 20 objects/s recovering
Dec 07 09:44:14 compute-1 ceph-mon[80077]: 6.e scrub starts
Dec 07 09:44:14 compute-1 ceph-mon[80077]: 6.e scrub ok
Dec 07 09:44:14 compute-1 ceph-mon[80077]: osdmap e67: 3 total, 3 up, 3 in
Dec 07 09:44:14 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 12.14 scrub starts
Dec 07 09:44:14 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 12.14 scrub ok
Dec 07 09:44:14 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:14 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f580016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:15 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:15 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f88009ec0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:15 compute-1 ceph-mon[80077]: 6.3 scrub ok
Dec 07 09:44:15 compute-1 ceph-mon[80077]: 9.19 scrub starts
Dec 07 09:44:15 compute-1 ceph-mon[80077]: 9.19 scrub ok
Dec 07 09:44:15 compute-1 ceph-mon[80077]: 6.5 scrub starts
Dec 07 09:44:15 compute-1 ceph-mon[80077]: 6.5 scrub ok
Dec 07 09:44:15 compute-1 ceph-mon[80077]: 12.14 scrub starts
Dec 07 09:44:15 compute-1 ceph-mon[80077]: 12.14 scrub ok
Dec 07 09:44:15 compute-1 ceph-mon[80077]: 5.4 scrub starts
Dec 07 09:44:15 compute-1 ceph-mon[80077]: 5.4 scrub ok
Dec 07 09:44:15 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Dec 07 09:44:15 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Dec 07 09:44:15 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:15 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f74002f50 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:16 compute-1 ceph-mon[80077]: 9.1e deep-scrub starts
Dec 07 09:44:16 compute-1 ceph-mon[80077]: 9.1e deep-scrub ok
Dec 07 09:44:16 compute-1 ceph-mon[80077]: 7.12 scrub starts
Dec 07 09:44:16 compute-1 ceph-mon[80077]: 7.12 scrub ok
Dec 07 09:44:16 compute-1 ceph-mon[80077]: pgmap v78: 337 pgs: 1 active+recovery_wait, 1 active+recovering+remapped, 8 unknown, 4 active+recovery_wait+remapped, 1 active+recovery_wait+degraded, 11 active+remapped, 4 peering, 1 active+recovering, 306 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1/179 objects degraded (0.559%); 32/179 objects misplaced (17.877%)
Dec 07 09:44:16 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:16 compute-1 ceph-mon[80077]: 5.e scrub starts
Dec 07 09:44:16 compute-1 ceph-mon[80077]: 5.e scrub ok
Dec 07 09:44:16 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 12.1b scrub starts
Dec 07 09:44:16 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 12.1b scrub ok
Dec 07 09:44:16 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:16 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f64004140 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:17 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:17 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f580016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:17 compute-1 ceph-mon[80077]: 9.1f scrub starts
Dec 07 09:44:17 compute-1 ceph-mon[80077]: 9.1f scrub ok
Dec 07 09:44:17 compute-1 ceph-mon[80077]: 12.1b scrub starts
Dec 07 09:44:17 compute-1 ceph-mon[80077]: 12.1b scrub ok
Dec 07 09:44:17 compute-1 ceph-mon[80077]: 5.0 scrub starts
Dec 07 09:44:17 compute-1 ceph-mon[80077]: 5.0 scrub ok
Dec 07 09:44:17 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 12.1f scrub starts
Dec 07 09:44:17 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 12.1f scrub ok
Dec 07 09:44:17 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:17 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f88009ec0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:17 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 07 09:44:18 compute-1 ceph-mon[80077]: 8.1e scrub starts
Dec 07 09:44:18 compute-1 ceph-mon[80077]: 8.1e scrub ok
Dec 07 09:44:18 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:18 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:18 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:18 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:18 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:18 compute-1 ceph-mon[80077]: Deploying daemon haproxy.rgw.default.compute-0.toeiml on compute-0
Dec 07 09:44:18 compute-1 ceph-mon[80077]: 12.1f scrub starts
Dec 07 09:44:18 compute-1 ceph-mon[80077]: pgmap v79: 337 pgs: 337 active+clean; 455 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 0 B/s wr, 92 op/s; 325 B/s, 11 objects/s recovering
Dec 07 09:44:18 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Dec 07 09:44:18 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Dec 07 09:44:18 compute-1 ceph-mon[80077]: 12.1f scrub ok
Dec 07 09:44:18 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e68 e68: 3 total, 3 up, 3 in
Dec 07 09:44:18 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 68 pg[6.b( empty local-lis/les=0/0 n=0 ec=53/21 lis/c=61/61 les/c/f=62/63/0 sis=68) [1] r=0 lpr=68 pi=[61,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:18 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 68 pg[6.3( empty local-lis/les=0/0 n=0 ec=53/21 lis/c=61/61 les/c/f=62/63/0 sis=68) [1] r=0 lpr=68 pi=[61,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:18 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 68 pg[6.7( empty local-lis/les=0/0 n=0 ec=53/21 lis/c=61/61 les/c/f=62/63/0 sis=68) [1] r=0 lpr=68 pi=[61,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:18 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 68 pg[6.f( empty local-lis/les=0/0 n=0 ec=53/21 lis/c=61/61 les/c/f=62/62/0 sis=68) [1] r=0 lpr=68 pi=[61,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:18 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Dec 07 09:44:18 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Dec 07 09:44:18 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:18 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f74004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:19 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:19 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f64004140 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:19 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 12.0 deep-scrub starts
Dec 07 09:44:19 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 12.0 deep-scrub ok
Dec 07 09:44:19 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:44:19 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.003000080s ======
Dec 07 09:44:19 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:44:19.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000080s
Dec 07 09:44:19 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:19 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f58002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:20 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 12.d deep-scrub starts
Dec 07 09:44:20 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 12.d deep-scrub ok
Dec 07 09:44:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:20 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f88009ec0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:20 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e69 e69: 3 total, 3 up, 3 in
Dec 07 09:44:20 compute-1 ceph-mon[80077]: 9.1c scrub starts
Dec 07 09:44:20 compute-1 ceph-mon[80077]: 9.1c scrub ok
Dec 07 09:44:20 compute-1 ceph-mon[80077]: 5.b scrub starts
Dec 07 09:44:20 compute-1 ceph-mon[80077]: 5.b scrub ok
Dec 07 09:44:20 compute-1 ceph-mon[80077]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 1/179 objects degraded (0.559%), 1 pg degraded)
Dec 07 09:44:20 compute-1 ceph-mon[80077]: Cluster is now healthy
Dec 07 09:44:20 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Dec 07 09:44:20 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Dec 07 09:44:20 compute-1 ceph-mon[80077]: osdmap e68: 3 total, 3 up, 3 in
Dec 07 09:44:20 compute-1 ceph-mon[80077]: 7.17 scrub starts
Dec 07 09:44:20 compute-1 ceph-mon[80077]: 7.17 scrub ok
Dec 07 09:44:20 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 69 pg[6.f( v 46'39 lc 45'1 (0'0,46'39] local-lis/les=68/69 n=1 ec=53/21 lis/c=61/61 les/c/f=62/62/0 sis=68) [1] r=0 lpr=68 pi=[61,68)/1 crt=46'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:20 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 69 pg[6.3( v 46'39 lc 0'0 (0'0,46'39] local-lis/les=68/69 n=2 ec=53/21 lis/c=61/61 les/c/f=62/63/0 sis=68) [1] r=0 lpr=68 pi=[61,68)/1 crt=46'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:20 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 69 pg[6.b( v 46'39 lc 0'0 (0'0,46'39] local-lis/les=68/69 n=1 ec=53/21 lis/c=61/61 les/c/f=62/63/0 sis=68) [1] r=0 lpr=68 pi=[61,68)/1 crt=46'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:20 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 69 pg[6.7( v 46'39 lc 45'21 (0'0,46'39] local-lis/les=68/69 n=1 ec=53/21 lis/c=61/61 les/c/f=62/63/0 sis=68) [1] r=0 lpr=68 pi=[61,68)/1 crt=46'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:21 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f74004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:21 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Dec 07 09:44:21 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Dec 07 09:44:21 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:44:21 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:44:21 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:44:21.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:44:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:21 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f64004140 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:21 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e70 e70: 3 total, 3 up, 3 in
Dec 07 09:44:21 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 70 pg[10.14( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=70 pruub=9.428574562s) [2] r=-1 lpr=70 pi=[57,70)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active pruub 227.312454224s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:21 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 70 pg[10.14( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=70 pruub=9.428528786s) [2] r=-1 lpr=70 pi=[57,70)/1 crt=56'1095 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 227.312454224s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:21 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 70 pg[10.c( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=70 pruub=9.427244186s) [2] r=-1 lpr=70 pi=[57,70)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active pruub 227.311782837s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:21 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 70 pg[10.c( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=70 pruub=9.427208900s) [2] r=-1 lpr=70 pi=[57,70)/1 crt=56'1095 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 227.311782837s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:21 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 70 pg[10.4( v 60'1100 (0'0,60'1100] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=70 pruub=9.426750183s) [2] r=-1 lpr=70 pi=[57,70)/1 crt=60'1100 lcod 59'1099 mlcod 59'1099 active pruub 227.311538696s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:21 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 70 pg[10.4( v 60'1100 (0'0,60'1100] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=70 pruub=9.426703453s) [2] r=-1 lpr=70 pi=[57,70)/1 crt=60'1100 lcod 59'1099 mlcod 0'0 unknown NOTIFY pruub 227.311538696s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:21 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 70 pg[10.1c( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=70 pruub=9.425880432s) [2] r=-1 lpr=70 pi=[57,70)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active pruub 227.311004639s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:21 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 70 pg[10.1c( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=70 pruub=9.425842285s) [2] r=-1 lpr=70 pi=[57,70)/1 crt=56'1095 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 227.311004639s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:21 compute-1 ceph-mon[80077]: 8.1d scrub starts
Dec 07 09:44:21 compute-1 ceph-mon[80077]: 8.1d scrub ok
Dec 07 09:44:21 compute-1 ceph-mon[80077]: 5.12 scrub starts
Dec 07 09:44:21 compute-1 ceph-mon[80077]: 5.12 scrub ok
Dec 07 09:44:21 compute-1 ceph-mon[80077]: 4.b scrub starts
Dec 07 09:44:21 compute-1 ceph-mon[80077]: 4.b scrub ok
Dec 07 09:44:21 compute-1 ceph-mon[80077]: pgmap v81: 337 pgs: 337 active+clean; 455 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 0 B/s wr, 85 op/s; 300 B/s, 10 objects/s recovering
Dec 07 09:44:21 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Dec 07 09:44:21 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Dec 07 09:44:21 compute-1 ceph-mon[80077]: 12.0 deep-scrub starts
Dec 07 09:44:21 compute-1 ceph-mon[80077]: 12.0 deep-scrub ok
Dec 07 09:44:21 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:21 compute-1 ceph-mon[80077]: 5.d scrub starts
Dec 07 09:44:21 compute-1 ceph-mon[80077]: 5.d scrub ok
Dec 07 09:44:21 compute-1 ceph-mon[80077]: 8.7 scrub starts
Dec 07 09:44:21 compute-1 ceph-mon[80077]: 12.d deep-scrub starts
Dec 07 09:44:21 compute-1 ceph-mon[80077]: 12.d deep-scrub ok
Dec 07 09:44:21 compute-1 ceph-mon[80077]: 8.7 scrub ok
Dec 07 09:44:21 compute-1 ceph-mon[80077]: osdmap e69: 3 total, 3 up, 3 in
Dec 07 09:44:21 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:21 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:21 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:21 compute-1 ceph-mon[80077]: Deploying daemon haproxy.rgw.default.compute-2.soidop on compute-2
Dec 07 09:44:21 compute-1 ceph-mon[80077]: 7.7 scrub starts
Dec 07 09:44:21 compute-1 ceph-mon[80077]: 7.7 scrub ok
Dec 07 09:44:21 compute-1 ceph-mon[80077]: pgmap v83: 337 pgs: 1 active+clean+scrubbing, 4 peering, 332 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 0 B/s wr, 73 op/s; 261 B/s, 9 objects/s recovering
Dec 07 09:44:22 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Dec 07 09:44:22 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Dec 07 09:44:22 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:22 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f58002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:22 compute-1 ceph-mon[80077]: 8.1f scrub starts
Dec 07 09:44:22 compute-1 ceph-mon[80077]: 11.6 scrub starts
Dec 07 09:44:22 compute-1 ceph-mon[80077]: 8.1f scrub ok
Dec 07 09:44:22 compute-1 ceph-mon[80077]: 11.6 scrub ok
Dec 07 09:44:22 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Dec 07 09:44:22 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Dec 07 09:44:22 compute-1 ceph-mon[80077]: osdmap e70: 3 total, 3 up, 3 in
Dec 07 09:44:22 compute-1 ceph-mon[80077]: 7.1 scrub starts
Dec 07 09:44:22 compute-1 ceph-mon[80077]: 7.1 scrub ok
Dec 07 09:44:22 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e71 e71: 3 total, 3 up, 3 in
Dec 07 09:44:22 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 71 pg[10.1c( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] r=0 lpr=71 pi=[57,71)/1 crt=56'1095 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:22 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 71 pg[10.1c( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] r=0 lpr=71 pi=[57,71)/1 crt=56'1095 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:22 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 71 pg[10.4( v 60'1100 (0'0,60'1100] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] r=0 lpr=71 pi=[57,71)/1 crt=60'1100 lcod 59'1099 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:22 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 71 pg[10.4( v 60'1100 (0'0,60'1100] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] r=0 lpr=71 pi=[57,71)/1 crt=60'1100 lcod 59'1099 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:22 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 71 pg[10.c( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] r=0 lpr=71 pi=[57,71)/1 crt=56'1095 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:22 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 71 pg[10.c( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] r=0 lpr=71 pi=[57,71)/1 crt=56'1095 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:22 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 71 pg[10.14( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] r=0 lpr=71 pi=[57,71)/1 crt=56'1095 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:22 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 71 pg[10.14( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] r=0 lpr=71 pi=[57,71)/1 crt=56'1095 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:22 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e71 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 07 09:44:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:23 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f88009ec0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:23 compute-1 ceph-mon[80077]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0.
Dec 07 09:44:23 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:44:23.151259) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 07 09:44:23 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16
Dec 07 09:44:23 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100663151320, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1112, "num_deletes": 251, "total_data_size": 1919168, "memory_usage": 1940512, "flush_reason": "Manual Compaction"}
Dec 07 09:44:23 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started
Dec 07 09:44:23 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100663166046, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 1211916, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6447, "largest_seqno": 7554, "table_properties": {"data_size": 1206500, "index_size": 2620, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 15161, "raw_average_key_size": 22, "raw_value_size": 1194170, "raw_average_value_size": 1735, "num_data_blocks": 116, "num_entries": 688, "num_filter_entries": 688, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100638, "oldest_key_time": 1765100638, "file_creation_time": 1765100663, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "19b7cd17-892b-4642-8771-311739802c4a", "db_session_id": "JE48258Z8V09XG5T5TD7", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}}
Dec 07 09:44:23 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 14842 microseconds, and 4921 cpu microseconds.
Dec 07 09:44:23 compute-1 ceph-mon[80077]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 07 09:44:23 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:44:23.166108) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 1211916 bytes OK
Dec 07 09:44:23 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:44:23.166133) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started
Dec 07 09:44:23 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:44:23.167932) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done
Dec 07 09:44:23 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:44:23.167954) EVENT_LOG_v1 {"time_micros": 1765100663167947, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 07 09:44:23 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:44:23.167977) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 07 09:44:23 compute-1 ceph-mon[80077]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 1913077, prev total WAL file size 1913424, number of live WAL files 2.
Dec 07 09:44:23 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 09:44:23 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:44:23.169944) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Dec 07 09:44:23 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 07 09:44:23 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(1183KB)], [15(11MB)]
Dec 07 09:44:23 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100663170018, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 13144265, "oldest_snapshot_seqno": -1}
Dec 07 09:44:23 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 3044 keys, 11925729 bytes, temperature: kUnknown
Dec 07 09:44:23 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100663301982, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 11925729, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11900960, "index_size": 16084, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7621, "raw_key_size": 78016, "raw_average_key_size": 25, "raw_value_size": 11840418, "raw_average_value_size": 3889, "num_data_blocks": 703, "num_entries": 3044, "num_filter_entries": 3044, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100473, "oldest_key_time": 0, "file_creation_time": 1765100663, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "19b7cd17-892b-4642-8771-311739802c4a", "db_session_id": "JE48258Z8V09XG5T5TD7", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}}
Dec 07 09:44:23 compute-1 ceph-mon[80077]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 07 09:44:23 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:44:23.302291) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 11925729 bytes
Dec 07 09:44:23 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:44:23.304647) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 99.5 rd, 90.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 11.4 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(20.7) write-amplify(9.8) OK, records in: 3570, records dropped: 526 output_compression: NoCompression
Dec 07 09:44:23 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:44:23.304680) EVENT_LOG_v1 {"time_micros": 1765100663304665, "job": 6, "event": "compaction_finished", "compaction_time_micros": 132050, "compaction_time_cpu_micros": 51489, "output_level": 6, "num_output_files": 1, "total_output_size": 11925729, "num_input_records": 3570, "num_output_records": 3044, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 07 09:44:23 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 09:44:23 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100663305421, "job": 6, "event": "table_file_deletion", "file_number": 17}
Dec 07 09:44:23 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 09:44:23 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100663309454, "job": 6, "event": "table_file_deletion", "file_number": 15}
Dec 07 09:44:23 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:44:23.169823) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 09:44:23 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:44:23.309531) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 09:44:23 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:44:23.309539) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 09:44:23 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:44:23.309542) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 09:44:23 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:44:23.309545) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 09:44:23 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:44:23.309548) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 09:44:23 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Dec 07 09:44:23 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Dec 07 09:44:23 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:44:23 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:44:23 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:44:23.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:44:23 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:44:23 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:44:23 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:44:23.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:44:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:23 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f74004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:23 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e72 e72: 3 total, 3 up, 3 in
Dec 07 09:44:23 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 72 pg[10.c( v 56'1095 (0'0,56'1095] local-lis/les=71/72 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] async=[2] r=0 lpr=71 pi=[57,71)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:23 compute-1 ceph-mon[80077]: 11.13 scrub starts
Dec 07 09:44:23 compute-1 ceph-mon[80077]: 11.13 scrub ok
Dec 07 09:44:23 compute-1 ceph-mon[80077]: 9.4 scrub starts
Dec 07 09:44:23 compute-1 ceph-mon[80077]: 9.4 scrub ok
Dec 07 09:44:23 compute-1 ceph-mon[80077]: osdmap e71: 3 total, 3 up, 3 in
Dec 07 09:44:23 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:23 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:23 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:23 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:23 compute-1 ceph-mon[80077]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 07 09:44:23 compute-1 ceph-mon[80077]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 07 09:44:23 compute-1 ceph-mon[80077]: Deploying daemon keepalived.rgw.default.compute-2.qnwhtu on compute-2
Dec 07 09:44:23 compute-1 ceph-mon[80077]: pgmap v86: 337 pgs: 1 active+clean+scrubbing, 4 peering, 332 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec 07 09:44:23 compute-1 ceph-mon[80077]: 7.0 scrub starts
Dec 07 09:44:23 compute-1 ceph-mon[80077]: 7.0 scrub ok
Dec 07 09:44:23 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 72 pg[10.14( v 56'1095 (0'0,56'1095] local-lis/les=71/72 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] async=[2] r=0 lpr=71 pi=[57,71)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:23 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 72 pg[10.4( v 60'1100 (0'0,60'1100] local-lis/les=71/72 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] async=[2] r=0 lpr=71 pi=[57,71)/1 crt=60'1100 lcod 59'1099 mlcod 0'0 active+remapped mbc={255={(0+1)=10}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:23 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 72 pg[10.1c( v 56'1095 (0'0,56'1095] local-lis/les=71/72 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=71) [2]/[1] async=[2] r=0 lpr=71 pi=[57,71)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:24 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 7.d scrub starts
Dec 07 09:44:24 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 7.d scrub ok
Dec 07 09:44:24 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:24 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f64004140 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:25 compute-1 ceph-mon[80077]: 8.6 scrub starts
Dec 07 09:44:25 compute-1 ceph-mon[80077]: 8.6 scrub ok
Dec 07 09:44:25 compute-1 ceph-mon[80077]: 11.b scrub starts
Dec 07 09:44:25 compute-1 ceph-mon[80077]: 11.b scrub ok
Dec 07 09:44:25 compute-1 ceph-mon[80077]: osdmap e72: 3 total, 3 up, 3 in
Dec 07 09:44:25 compute-1 ceph-mon[80077]: 7.d scrub starts
Dec 07 09:44:25 compute-1 ceph-mon[80077]: 7.d scrub ok
Dec 07 09:44:25 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e73 e73: 3 total, 3 up, 3 in
Dec 07 09:44:25 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 73 pg[10.14( v 56'1095 (0'0,56'1095] local-lis/les=71/72 n=5 ec=57/42 lis/c=71/57 les/c/f=72/58/0 sis=73 pruub=14.960506439s) [2] async=[2] r=-1 lpr=73 pi=[57,73)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active pruub 235.949249268s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:25 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 73 pg[10.14( v 56'1095 (0'0,56'1095] local-lis/les=71/72 n=5 ec=57/42 lis/c=71/57 les/c/f=72/58/0 sis=73 pruub=14.960450172s) [2] r=-1 lpr=73 pi=[57,73)/1 crt=56'1095 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 235.949249268s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:25 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 73 pg[10.c( v 56'1095 (0'0,56'1095] local-lis/les=71/72 n=6 ec=57/42 lis/c=71/57 les/c/f=72/58/0 sis=73 pruub=14.954835892s) [2] async=[2] r=-1 lpr=73 pi=[57,73)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active pruub 235.943923950s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:25 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 73 pg[10.4( v 72'1104 (0'0,72'1104] local-lis/les=71/72 n=6 ec=57/42 lis/c=71/57 les/c/f=72/58/0 sis=73 pruub=14.960099220s) [2] async=[2] r=-1 lpr=73 pi=[57,73)/1 crt=60'1100 lcod 72'1103 mlcod 72'1103 active pruub 235.949264526s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:25 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 73 pg[10.4( v 72'1104 (0'0,72'1104] local-lis/les=71/72 n=6 ec=57/42 lis/c=71/57 les/c/f=72/58/0 sis=73 pruub=14.960040092s) [2] r=-1 lpr=73 pi=[57,73)/1 crt=60'1100 lcod 72'1103 mlcod 0'0 unknown NOTIFY pruub 235.949264526s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:25 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 73 pg[10.c( v 56'1095 (0'0,56'1095] local-lis/les=71/72 n=6 ec=57/42 lis/c=71/57 les/c/f=72/58/0 sis=73 pruub=14.954689980s) [2] r=-1 lpr=73 pi=[57,73)/1 crt=56'1095 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 235.943923950s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:25 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 73 pg[10.1c( v 56'1095 (0'0,56'1095] local-lis/les=71/72 n=5 ec=57/42 lis/c=71/57 les/c/f=72/58/0 sis=73 pruub=14.959450722s) [2] async=[2] r=-1 lpr=73 pi=[57,73)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active pruub 235.949295044s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:25 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 73 pg[10.1c( v 56'1095 (0'0,56'1095] local-lis/les=71/72 n=5 ec=57/42 lis/c=71/57 les/c/f=72/58/0 sis=73 pruub=14.959377289s) [2] r=-1 lpr=73 pi=[57,73)/1 crt=56'1095 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 235.949295044s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:25 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f58002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:25 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 12.5 scrub starts
Dec 07 09:44:25 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 12.5 scrub ok
Dec 07 09:44:25 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:44:25 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:44:25 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:44:25.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:44:25 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:44:25 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:44:25 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:44:25.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:44:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:25 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f88009ec0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:25 compute-1 sshd-session[86967]: Accepted publickey for zuul from 192.168.122.30 port 54402 ssh2: ECDSA SHA256:Ge4vEI3tJyOAEV3kv3WUGTzBV/uoL+dgWZHkPhbKL64
Dec 07 09:44:25 compute-1 systemd-logind[796]: New session 36 of user zuul.
Dec 07 09:44:25 compute-1 systemd[1]: Started Session 36 of User zuul.
Dec 07 09:44:25 compute-1 sshd-session[86967]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 07 09:44:26 compute-1 ceph-mon[80077]: 11.2 scrub starts
Dec 07 09:44:26 compute-1 ceph-mon[80077]: 11.2 scrub ok
Dec 07 09:44:26 compute-1 ceph-mon[80077]: osdmap e73: 3 total, 3 up, 3 in
Dec 07 09:44:26 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:26 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:26 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:26 compute-1 ceph-mon[80077]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 07 09:44:26 compute-1 ceph-mon[80077]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 07 09:44:26 compute-1 ceph-mon[80077]: Deploying daemon keepalived.rgw.default.compute-0.xnnorz on compute-0
Dec 07 09:44:26 compute-1 ceph-mon[80077]: pgmap v89: 337 pgs: 1 active+clean+scrubbing, 4 peering, 332 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec 07 09:44:26 compute-1 ceph-mon[80077]: 12.5 scrub starts
Dec 07 09:44:26 compute-1 ceph-mon[80077]: 12.5 scrub ok
Dec 07 09:44:26 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e74 e74: 3 total, 3 up, 3 in
Dec 07 09:44:26 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 12.f scrub starts
Dec 07 09:44:26 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 12.f scrub ok
Dec 07 09:44:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:26 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f74004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:26 compute-1 python3.9[87121]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 07 09:44:27 compute-1 ceph-mon[80077]: 9.0 scrub starts
Dec 07 09:44:27 compute-1 ceph-mon[80077]: 9.0 scrub ok
Dec 07 09:44:27 compute-1 ceph-mon[80077]: osdmap e74: 3 total, 3 up, 3 in
Dec 07 09:44:27 compute-1 ceph-mon[80077]: 12.f scrub starts
Dec 07 09:44:27 compute-1 ceph-mon[80077]: 12.f scrub ok
Dec 07 09:44:27 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:27 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:27 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:27 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:27 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:27 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f64004140 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:27 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 12.1 scrub starts
Dec 07 09:44:27 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 12.1 scrub ok
Dec 07 09:44:27 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:44:27 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:44:27 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:44:27.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:44:27 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:44:27 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:44:27 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:44:27.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:44:27 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:27 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f58003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:27 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e74 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 07 09:44:28 compute-1 ceph-mon[80077]: 10.c scrub starts
Dec 07 09:44:28 compute-1 ceph-mon[80077]: 10.c scrub ok
Dec 07 09:44:28 compute-1 ceph-mon[80077]: 8.1 scrub starts
Dec 07 09:44:28 compute-1 ceph-mon[80077]: 8.1 scrub ok
Dec 07 09:44:28 compute-1 ceph-mon[80077]: Deploying daemon prometheus.compute-0 on compute-0
Dec 07 09:44:28 compute-1 ceph-mon[80077]: 12.1 scrub starts
Dec 07 09:44:28 compute-1 ceph-mon[80077]: 12.1 scrub ok
Dec 07 09:44:28 compute-1 ceph-mon[80077]: pgmap v91: 337 pgs: 4 peering, 333 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 247 B/s, 2 keys/s, 7 objects/s recovering
Dec 07 09:44:28 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 12.16 scrub starts
Dec 07 09:44:28 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 12.16 scrub ok
Dec 07 09:44:28 compute-1 sudo[87334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyunuvvvsslzqcotiuxwqdulyjevcfcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100668.1368773-57-112096170800929/AnsiballZ_command.py'
Dec 07 09:44:28 compute-1 sudo[87334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:44:28 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:28 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f88009ec0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:28 compute-1 python3.9[87336]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:44:29 compute-1 ceph-mon[80077]: 4.f scrub starts
Dec 07 09:44:29 compute-1 ceph-mon[80077]: 10.4 scrub starts
Dec 07 09:44:29 compute-1 ceph-mon[80077]: 4.f scrub ok
Dec 07 09:44:29 compute-1 ceph-mon[80077]: 10.4 scrub ok
Dec 07 09:44:29 compute-1 ceph-mon[80077]: 10.14 scrub starts
Dec 07 09:44:29 compute-1 ceph-mon[80077]: 10.14 scrub ok
Dec 07 09:44:29 compute-1 ceph-mon[80077]: 12.16 scrub starts
Dec 07 09:44:29 compute-1 ceph-mon[80077]: 12.16 scrub ok
Dec 07 09:44:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:29 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f74004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:29 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 12.15 scrub starts
Dec 07 09:44:29 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 12.15 scrub ok
Dec 07 09:44:29 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:44:29 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.002000053s ======
Dec 07 09:44:29 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:44:29.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Dec 07 09:44:29 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:44:29 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:44:29 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:44:29.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:44:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:29 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f64004140 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:30 compute-1 ceph-mon[80077]: 11.0 scrub starts
Dec 07 09:44:30 compute-1 ceph-mon[80077]: 11.0 scrub ok
Dec 07 09:44:30 compute-1 ceph-mon[80077]: 9.17 scrub starts
Dec 07 09:44:30 compute-1 ceph-mon[80077]: 9.17 scrub ok
Dec 07 09:44:30 compute-1 ceph-mon[80077]: 12.15 scrub starts
Dec 07 09:44:30 compute-1 ceph-mon[80077]: 12.15 scrub ok
Dec 07 09:44:30 compute-1 ceph-mon[80077]: pgmap v92: 337 pgs: 4 peering, 333 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 185 B/s, 1 keys/s, 5 objects/s recovering
Dec 07 09:44:30 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Dec 07 09:44:30 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Dec 07 09:44:30 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:30 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f64004140 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:31 compute-1 ceph-mon[80077]: 9.2 scrub starts
Dec 07 09:44:31 compute-1 ceph-mon[80077]: 9.2 scrub ok
Dec 07 09:44:31 compute-1 ceph-mon[80077]: 9.16 scrub starts
Dec 07 09:44:31 compute-1 ceph-mon[80077]: 9.16 scrub ok
Dec 07 09:44:31 compute-1 ceph-mon[80077]: 11.c scrub starts
Dec 07 09:44:31 compute-1 ceph-mon[80077]: 11.c scrub ok
Dec 07 09:44:31 compute-1 ceph-mon[80077]: 11.1b scrub starts
Dec 07 09:44:31 compute-1 ceph-mon[80077]: 11.1b scrub ok
Dec 07 09:44:31 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:31 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c000f30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:31 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 11.1d deep-scrub starts
Dec 07 09:44:31 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 11.1d deep-scrub ok
Dec 07 09:44:31 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:44:31 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:44:31 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:44:31.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:44:31 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:44:31 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:44:31 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:44:31.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:44:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:31 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f74004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:32 compute-1 ceph-mon[80077]: 9.7 scrub starts
Dec 07 09:44:32 compute-1 ceph-mon[80077]: 9.7 scrub ok
Dec 07 09:44:32 compute-1 ceph-mon[80077]: pgmap v93: 337 pgs: 337 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 101 B/s, 1 keys/s, 1 objects/s recovering
Dec 07 09:44:32 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Dec 07 09:44:32 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Dec 07 09:44:32 compute-1 ceph-mon[80077]: 11.1d deep-scrub starts
Dec 07 09:44:32 compute-1 ceph-mon[80077]: 11.1d deep-scrub ok
Dec 07 09:44:32 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e75 e75: 3 total, 3 up, 3 in
Dec 07 09:44:32 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 75 pg[6.5( empty local-lis/les=0/0 n=0 ec=53/21 lis/c=61/61 les/c/f=62/63/0 sis=75) [1] r=0 lpr=75 pi=[61,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:32 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 75 pg[6.d( empty local-lis/les=0/0 n=0 ec=53/21 lis/c=61/61 les/c/f=62/63/0 sis=75) [1] r=0 lpr=75 pi=[61,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:32 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Dec 07 09:44:32 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Dec 07 09:44:32 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:32 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f64004140 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:32 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e76 e76: 3 total, 3 up, 3 in
Dec 07 09:44:32 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 76 pg[6.d( v 46'39 lc 45'13 (0'0,46'39] local-lis/les=75/76 n=2 ec=53/21 lis/c=61/61 les/c/f=62/63/0 sis=75) [1] r=0 lpr=75 pi=[61,75)/1 crt=46'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:32 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 76 pg[6.5( v 46'39 lc 45'11 (0'0,46'39] local-lis/les=75/76 n=2 ec=53/21 lis/c=61/61 les/c/f=62/63/0 sis=75) [1] r=0 lpr=75 pi=[61,75)/1 crt=46'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:32 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e76 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 07 09:44:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:33 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f58003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:33 compute-1 ceph-mon[80077]: 11.9 scrub starts
Dec 07 09:44:33 compute-1 ceph-mon[80077]: 11.9 scrub ok
Dec 07 09:44:33 compute-1 ceph-mon[80077]: 11.19 scrub starts
Dec 07 09:44:33 compute-1 ceph-mon[80077]: 11.19 scrub ok
Dec 07 09:44:33 compute-1 ceph-mon[80077]: 9.14 scrub starts
Dec 07 09:44:33 compute-1 ceph-mon[80077]: 9.14 scrub ok
Dec 07 09:44:33 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Dec 07 09:44:33 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec 07 09:44:33 compute-1 ceph-mon[80077]: osdmap e75: 3 total, 3 up, 3 in
Dec 07 09:44:33 compute-1 ceph-mon[80077]: 11.1c scrub starts
Dec 07 09:44:33 compute-1 ceph-mon[80077]: 11.1c scrub ok
Dec 07 09:44:33 compute-1 ceph-mon[80077]: osdmap e76: 3 total, 3 up, 3 in
Dec 07 09:44:33 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Dec 07 09:44:33 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Dec 07 09:44:33 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:44:33 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:44:33 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:44:33.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:44:33 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:44:33 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:44:33 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:44:33.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:44:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:33 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c000f30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:33 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e77 e77: 3 total, 3 up, 3 in
Dec 07 09:44:33 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 77 pg[6.e( v 46'39 (0'0,46'39] local-lis/les=63/64 n=1 ec=53/21 lis/c=63/63 les/c/f=64/64/0 sis=77 pruub=8.987504005s) [0] r=-1 lpr=77 pi=[63,77)/1 crt=46'39 mlcod 46'39 active pruub 238.927001953s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:33 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 77 pg[6.e( v 46'39 (0'0,46'39] local-lis/les=63/64 n=1 ec=53/21 lis/c=63/63 les/c/f=64/64/0 sis=77 pruub=8.987448692s) [0] r=-1 lpr=77 pi=[63,77)/1 crt=46'39 mlcod 0'0 unknown NOTIFY pruub 238.927001953s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:33 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 77 pg[6.6( v 46'39 (0'0,46'39] local-lis/les=63/64 n=2 ec=53/21 lis/c=63/63 les/c/f=64/64/0 sis=77 pruub=8.987331390s) [0] r=-1 lpr=77 pi=[63,77)/1 crt=46'39 mlcod 46'39 active pruub 238.927444458s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:33 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 77 pg[6.6( v 46'39 (0'0,46'39] local-lis/les=63/64 n=2 ec=53/21 lis/c=63/63 les/c/f=64/64/0 sis=77 pruub=8.987160683s) [0] r=-1 lpr=77 pi=[63,77)/1 crt=46'39 mlcod 0'0 unknown NOTIFY pruub 238.927444458s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:33 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 77 pg[10.16( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=66/66 les/c/f=67/67/0 sis=77) [1] r=0 lpr=77 pi=[66,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:33 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 77 pg[10.e( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=66/66 les/c/f=67/67/0 sis=77) [1] r=0 lpr=77 pi=[66,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:33 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 77 pg[10.6( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=66/66 les/c/f=67/67/0 sis=77) [1] r=0 lpr=77 pi=[66,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:33 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 77 pg[10.1e( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=66/66 les/c/f=67/67/0 sis=77) [1] r=0 lpr=77 pi=[66,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:34 compute-1 ceph-mon[80077]: 11.d scrub starts
Dec 07 09:44:34 compute-1 ceph-mon[80077]: 11.d scrub ok
Dec 07 09:44:34 compute-1 ceph-mon[80077]: 8.11 scrub starts
Dec 07 09:44:34 compute-1 ceph-mon[80077]: 8.11 scrub ok
Dec 07 09:44:34 compute-1 ceph-mon[80077]: 11.7 scrub starts
Dec 07 09:44:34 compute-1 ceph-mon[80077]: 11.7 scrub ok
Dec 07 09:44:34 compute-1 ceph-mon[80077]: pgmap v96: 337 pgs: 337 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec 07 09:44:34 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Dec 07 09:44:34 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Dec 07 09:44:34 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Dec 07 09:44:34 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Dec 07 09:44:34 compute-1 ceph-mon[80077]: osdmap e77: 3 total, 3 up, 3 in
Dec 07 09:44:34 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Dec 07 09:44:34 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Dec 07 09:44:34 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:34 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f74004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:35 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e78 e78: 3 total, 3 up, 3 in
Dec 07 09:44:35 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 78 pg[10.16( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=66/66 les/c/f=67/67/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[66,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:35 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 78 pg[10.16( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=66/66 les/c/f=67/67/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[66,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:35 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 78 pg[10.e( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=66/66 les/c/f=67/67/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[66,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:35 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 78 pg[10.e( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=66/66 les/c/f=67/67/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[66,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:35 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 78 pg[10.6( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=66/66 les/c/f=67/67/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[66,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:35 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 78 pg[10.6( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=66/66 les/c/f=67/67/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[66,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:35 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 78 pg[10.1e( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=66/66 les/c/f=67/67/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[66,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:35 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 78 pg[10.1e( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=66/66 les/c/f=67/67/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[66,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:35 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:35 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f64004140 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:35 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Dec 07 09:44:35 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Dec 07 09:44:35 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:44:35 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:44:35 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:44:35.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:44:35 compute-1 ceph-mon[80077]: 8.5 scrub starts
Dec 07 09:44:35 compute-1 ceph-mon[80077]: 8.5 scrub ok
Dec 07 09:44:35 compute-1 ceph-mon[80077]: 8.4 scrub starts
Dec 07 09:44:35 compute-1 ceph-mon[80077]: 8.4 scrub ok
Dec 07 09:44:35 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:35 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:35 compute-1 ceph-mon[80077]: osdmap e78: 3 total, 3 up, 3 in
Dec 07 09:44:35 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:35 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Dec 07 09:44:35 compute-1 ceph-mon[80077]: 6.6 scrub starts
Dec 07 09:44:35 compute-1 ceph-mon[80077]: 6.6 scrub ok
Dec 07 09:44:35 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Dec 07 09:44:35 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Dec 07 09:44:35 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:44:35 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:44:35 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:44:35.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:44:35 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:35 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f58003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:36 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e79 e79: 3 total, 3 up, 3 in
Dec 07 09:44:36 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 79 pg[10.1f( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=64/64 les/c/f=65/65/0 sis=79) [1] r=0 lpr=79 pi=[64,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:36 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 79 pg[10.f( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=65/65 les/c/f=66/66/0 sis=79) [1] r=0 lpr=79 pi=[65,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:36 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 79 pg[10.7( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=64/64 les/c/f=65/65/0 sis=79) [1] r=0 lpr=79 pi=[64,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:36 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 79 pg[10.17( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=64/64 les/c/f=65/65/0 sis=79) [1] r=0 lpr=79 pi=[64,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:36 compute-1 sshd-session[83146]: Connection closed by 192.168.122.100 port 42436
Dec 07 09:44:36 compute-1 sshd-session[83127]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 07 09:44:36 compute-1 systemd[1]: session-34.scope: Deactivated successfully.
Dec 07 09:44:36 compute-1 systemd[1]: session-34.scope: Consumed 20.392s CPU time.
Dec 07 09:44:36 compute-1 systemd-logind[796]: Session 34 logged out. Waiting for processes to exit.
Dec 07 09:44:36 compute-1 systemd-logind[796]: Removed session 34.
Dec 07 09:44:36 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: ignoring --setuser ceph since I am not root
Dec 07 09:44:36 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: ignoring --setgroup ceph since I am not root
Dec 07 09:44:36 compute-1 ceph-mgr[80383]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec 07 09:44:36 compute-1 ceph-mgr[80383]: pidfile_write: ignore empty --pid-file
Dec 07 09:44:36 compute-1 sudo[87334]: pam_unix(sudo:session): session closed for user root
Dec 07 09:44:36 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'alerts'
Dec 07 09:44:36 compute-1 ceph-mgr[80383]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 07 09:44:36 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'balancer'
Dec 07 09:44:36 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:44:36.333+0000 7f789b18c140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 07 09:44:36 compute-1 ceph-mgr[80383]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 07 09:44:36 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'cephadm'
Dec 07 09:44:36 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:44:36.410+0000 7f789b18c140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 07 09:44:36 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Dec 07 09:44:36 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Dec 07 09:44:36 compute-1 sshd-session[86970]: Connection closed by 192.168.122.30 port 54402
Dec 07 09:44:36 compute-1 sshd-session[86967]: pam_unix(sshd:session): session closed for user zuul
Dec 07 09:44:36 compute-1 systemd[1]: session-36.scope: Deactivated successfully.
Dec 07 09:44:36 compute-1 systemd[1]: session-36.scope: Consumed 8.878s CPU time.
Dec 07 09:44:36 compute-1 systemd-logind[796]: Session 36 logged out. Waiting for processes to exit.
Dec 07 09:44:36 compute-1 systemd-logind[796]: Removed session 36.
Dec 07 09:44:36 compute-1 ceph-mon[80077]: 12.17 scrub starts
Dec 07 09:44:36 compute-1 ceph-mon[80077]: 12.17 scrub ok
Dec 07 09:44:36 compute-1 ceph-mon[80077]: 11.1e scrub starts
Dec 07 09:44:36 compute-1 ceph-mon[80077]: pgmap v99: 337 pgs: 337 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec 07 09:44:36 compute-1 ceph-mon[80077]: 11.1e scrub ok
Dec 07 09:44:36 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:36 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Dec 07 09:44:36 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Dec 07 09:44:36 compute-1 ceph-mon[80077]: osdmap e79: 3 total, 3 up, 3 in
Dec 07 09:44:36 compute-1 ceph-mon[80077]: from='mgr.14457 192.168.122.100:0/642519861' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Dec 07 09:44:36 compute-1 ceph-mon[80077]: mgrmap e29: compute-0.dotugk(active, since 106s), standbys: compute-2.ntknug, compute-1.buauyv
Dec 07 09:44:36 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:36 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c000f30 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:37 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e80 e80: 3 total, 3 up, 3 in
Dec 07 09:44:37 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 80 pg[10.16( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=5 ec=57/42 lis/c=78/66 les/c/f=79/67/0 sis=80) [1] r=0 lpr=80 pi=[66,80)/1 luod=0'0 crt=56'1095 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:37 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 80 pg[10.16( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=5 ec=57/42 lis/c=78/66 les/c/f=79/67/0 sis=80) [1] r=0 lpr=80 pi=[66,80)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:37 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 80 pg[10.17( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=64/64 les/c/f=65/65/0 sis=80) [1]/[2] r=-1 lpr=80 pi=[64,80)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:37 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 80 pg[10.17( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=64/64 les/c/f=65/65/0 sis=80) [1]/[2] r=-1 lpr=80 pi=[64,80)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:37 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 80 pg[10.e( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=6 ec=57/42 lis/c=78/66 les/c/f=79/67/0 sis=80) [1] r=0 lpr=80 pi=[66,80)/1 luod=0'0 crt=56'1095 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:37 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 80 pg[10.e( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=6 ec=57/42 lis/c=78/66 les/c/f=79/67/0 sis=80) [1] r=0 lpr=80 pi=[66,80)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:37 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 80 pg[10.6( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=6 ec=57/42 lis/c=78/66 les/c/f=79/67/0 sis=80) [1] r=0 lpr=80 pi=[66,80)/1 luod=0'0 crt=56'1095 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:37 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 80 pg[10.6( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=6 ec=57/42 lis/c=78/66 les/c/f=79/67/0 sis=80) [1] r=0 lpr=80 pi=[66,80)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:37 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 80 pg[10.7( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=64/64 les/c/f=65/65/0 sis=80) [1]/[2] r=-1 lpr=80 pi=[64,80)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:37 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 80 pg[10.1e( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=5 ec=57/42 lis/c=78/66 les/c/f=79/67/0 sis=80) [1] r=0 lpr=80 pi=[66,80)/1 luod=0'0 crt=56'1095 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:37 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 80 pg[10.1e( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=5 ec=57/42 lis/c=78/66 les/c/f=79/67/0 sis=80) [1] r=0 lpr=80 pi=[66,80)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:37 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 80 pg[10.7( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=64/64 les/c/f=65/65/0 sis=80) [1]/[2] r=-1 lpr=80 pi=[64,80)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:37 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 80 pg[10.f( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=65/65 les/c/f=66/66/0 sis=80) [1]/[2] r=-1 lpr=80 pi=[65,80)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:37 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 80 pg[10.f( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=65/65 les/c/f=66/66/0 sis=80) [1]/[2] r=-1 lpr=80 pi=[65,80)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:37 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 80 pg[10.1f( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=64/64 les/c/f=65/65/0 sis=80) [1]/[2] r=-1 lpr=80 pi=[64,80)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:37 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 80 pg[10.1f( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=64/64 les/c/f=65/65/0 sis=80) [1]/[2] r=-1 lpr=80 pi=[64,80)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:37 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'crash'
Dec 07 09:44:37 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:37 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f74004050 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:37 compute-1 ceph-mgr[80383]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 07 09:44:37 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'dashboard'
Dec 07 09:44:37 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:44:37.261+0000 7f789b18c140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 07 09:44:37 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Dec 07 09:44:37 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Dec 07 09:44:37 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:44:37 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:44:37 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:44:37.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:44:37 compute-1 ceph-mon[80077]: 8.2 scrub starts
Dec 07 09:44:37 compute-1 ceph-mon[80077]: 8.2 scrub ok
Dec 07 09:44:37 compute-1 ceph-mon[80077]: 11.4 scrub starts
Dec 07 09:44:37 compute-1 ceph-mon[80077]: 11.4 scrub ok
Dec 07 09:44:37 compute-1 ceph-mon[80077]: osdmap e80: 3 total, 3 up, 3 in
Dec 07 09:44:37 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:44:37 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:44:37 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:44:37.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:44:37 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:37 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f64004140 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:37 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'devicehealth'
Dec 07 09:44:37 compute-1 ceph-mgr[80383]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 07 09:44:37 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'diskprediction_local'
Dec 07 09:44:37 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:44:37.938+0000 7f789b18c140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 07 09:44:37 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e80 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 07 09:44:38 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e81 e81: 3 total, 3 up, 3 in
Dec 07 09:44:38 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 81 pg[10.1e( v 56'1095 (0'0,56'1095] local-lis/les=80/81 n=5 ec=57/42 lis/c=78/66 les/c/f=79/67/0 sis=80) [1] r=0 lpr=80 pi=[66,80)/1 crt=56'1095 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:38 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 81 pg[10.e( v 56'1095 (0'0,56'1095] local-lis/les=80/81 n=6 ec=57/42 lis/c=78/66 les/c/f=79/67/0 sis=80) [1] r=0 lpr=80 pi=[66,80)/1 crt=56'1095 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:38 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 81 pg[10.16( v 56'1095 (0'0,56'1095] local-lis/les=80/81 n=5 ec=57/42 lis/c=78/66 les/c/f=79/67/0 sis=80) [1] r=0 lpr=80 pi=[66,80)/1 crt=56'1095 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:38 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 81 pg[10.6( v 56'1095 (0'0,56'1095] local-lis/les=80/81 n=6 ec=57/42 lis/c=78/66 les/c/f=79/67/0 sis=80) [1] r=0 lpr=80 pi=[66,80)/1 crt=56'1095 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:38 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 07 09:44:38 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 07 09:44:38 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]:   from numpy import show_config as show_numpy_config
Dec 07 09:44:38 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:44:38.103+0000 7f789b18c140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 07 09:44:38 compute-1 ceph-mgr[80383]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 07 09:44:38 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'influx'
Dec 07 09:44:38 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:44:38.178+0000 7f789b18c140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 07 09:44:38 compute-1 ceph-mgr[80383]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 07 09:44:38 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'insights'
Dec 07 09:44:38 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'iostat'
Dec 07 09:44:38 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:44:38.369+0000 7f789b18c140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 07 09:44:38 compute-1 ceph-mgr[80383]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 07 09:44:38 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'k8sevents'
Dec 07 09:44:38 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Dec 07 09:44:38 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Dec 07 09:44:38 compute-1 ceph-mon[80077]: 12.11 scrub starts
Dec 07 09:44:38 compute-1 ceph-mon[80077]: 12.11 scrub ok
Dec 07 09:44:38 compute-1 ceph-mon[80077]: 11.5 scrub starts
Dec 07 09:44:38 compute-1 ceph-mon[80077]: 11.5 scrub ok
Dec 07 09:44:38 compute-1 ceph-mon[80077]: osdmap e81: 3 total, 3 up, 3 in
Dec 07 09:44:38 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:38 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f58003c10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:38 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'localpool'
Dec 07 09:44:38 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'mds_autoscaler'
Dec 07 09:44:39 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e82 e82: 3 total, 3 up, 3 in
Dec 07 09:44:39 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 82 pg[10.17( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=5 ec=57/42 lis/c=80/64 les/c/f=81/65/0 sis=82) [1] r=0 lpr=82 pi=[64,82)/1 luod=0'0 crt=56'1095 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:39 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 82 pg[10.f( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=6 ec=57/42 lis/c=80/65 les/c/f=81/66/0 sis=82) [1] r=0 lpr=82 pi=[65,82)/1 luod=0'0 crt=56'1095 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:39 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 82 pg[10.7( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=6 ec=57/42 lis/c=80/64 les/c/f=81/65/0 sis=82) [1] r=0 lpr=82 pi=[64,82)/1 luod=0'0 crt=56'1095 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:39 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 82 pg[10.17( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=5 ec=57/42 lis/c=80/64 les/c/f=81/65/0 sis=82) [1] r=0 lpr=82 pi=[64,82)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:39 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 82 pg[10.f( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=6 ec=57/42 lis/c=80/65 les/c/f=81/66/0 sis=82) [1] r=0 lpr=82 pi=[65,82)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:39 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 82 pg[10.7( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=6 ec=57/42 lis/c=80/64 les/c/f=81/65/0 sis=82) [1] r=0 lpr=82 pi=[64,82)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:39 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 82 pg[10.1f( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=5 ec=57/42 lis/c=80/64 les/c/f=81/65/0 sis=82) [1] r=0 lpr=82 pi=[64,82)/1 luod=0'0 crt=56'1095 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:39 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 82 pg[10.1f( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=5 ec=57/42 lis/c=80/64 les/c/f=81/65/0 sis=82) [1] r=0 lpr=82 pi=[64,82)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:39 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'mirroring'
Dec 07 09:44:39 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'nfs'
Dec 07 09:44:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:39 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c000f30 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:44:39.412+0000 7f789b18c140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 07 09:44:39 compute-1 ceph-mgr[80383]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 07 09:44:39 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'orchestrator'
Dec 07 09:44:39 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Dec 07 09:44:39 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Dec 07 09:44:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:44:39.630+0000 7f789b18c140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 07 09:44:39 compute-1 ceph-mgr[80383]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 07 09:44:39 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'osd_perf_query'
Dec 07 09:44:39 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:44:39 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:44:39 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:44:39.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:44:39 compute-1 ceph-mon[80077]: 9.11 scrub starts
Dec 07 09:44:39 compute-1 ceph-mon[80077]: 9.11 scrub ok
Dec 07 09:44:39 compute-1 ceph-mon[80077]: osdmap e82: 3 total, 3 up, 3 in
Dec 07 09:44:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:44:39.716+0000 7f789b18c140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 07 09:44:39 compute-1 ceph-mgr[80383]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 07 09:44:39 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'osd_support'
Dec 07 09:44:39 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:44:39 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:44:39 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:44:39.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:44:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:44:39.786+0000 7f789b18c140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 07 09:44:39 compute-1 ceph-mgr[80383]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 07 09:44:39 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'pg_autoscaler'
Dec 07 09:44:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:39 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f74004050 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:44:39.873+0000 7f789b18c140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 07 09:44:39 compute-1 ceph-mgr[80383]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 07 09:44:39 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'progress'
Dec 07 09:44:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:44:39.955+0000 7f789b18c140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 07 09:44:39 compute-1 ceph-mgr[80383]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 07 09:44:39 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'prometheus'
Dec 07 09:44:40 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e83 e83: 3 total, 3 up, 3 in
Dec 07 09:44:40 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 83 pg[10.17( v 56'1095 (0'0,56'1095] local-lis/les=82/83 n=5 ec=57/42 lis/c=80/64 les/c/f=81/65/0 sis=82) [1] r=0 lpr=82 pi=[64,82)/1 crt=56'1095 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:40 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 83 pg[10.7( v 56'1095 (0'0,56'1095] local-lis/les=82/83 n=6 ec=57/42 lis/c=80/64 les/c/f=81/65/0 sis=82) [1] r=0 lpr=82 pi=[64,82)/1 crt=56'1095 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:40 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 83 pg[10.1f( v 56'1095 (0'0,56'1095] local-lis/les=82/83 n=5 ec=57/42 lis/c=80/64 les/c/f=81/65/0 sis=82) [1] r=0 lpr=82 pi=[64,82)/1 crt=56'1095 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:40 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 83 pg[10.f( v 56'1095 (0'0,56'1095] local-lis/les=82/83 n=6 ec=57/42 lis/c=80/65 les/c/f=81/66/0 sis=82) [1] r=0 lpr=82 pi=[65,82)/1 crt=56'1095 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:40 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:44:40.309+0000 7f789b18c140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 07 09:44:40 compute-1 ceph-mgr[80383]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 07 09:44:40 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'rbd_support'
Dec 07 09:44:40 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:44:40.425+0000 7f789b18c140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 07 09:44:40 compute-1 ceph-mgr[80383]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 07 09:44:40 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'restful'
Dec 07 09:44:40 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Dec 07 09:44:40 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Dec 07 09:44:40 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'rgw'
Dec 07 09:44:40 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:40 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f80001ce0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:40 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:44:40.906+0000 7f789b18c140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 07 09:44:40 compute-1 ceph-mgr[80383]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 07 09:44:40 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'rook'
Dec 07 09:44:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:41 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f58003c10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:41 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 9.e scrub starts
Dec 07 09:44:41 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 9.e scrub ok
Dec 07 09:44:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:44:41.536+0000 7f789b18c140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 07 09:44:41 compute-1 ceph-mgr[80383]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 07 09:44:41 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'selftest'
Dec 07 09:44:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:44:41.614+0000 7f789b18c140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 07 09:44:41 compute-1 ceph-mgr[80383]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 07 09:44:41 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'snap_schedule'
Dec 07 09:44:41 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:44:41 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:44:41 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:44:41.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:44:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:44:41.700+0000 7f789b18c140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 07 09:44:41 compute-1 ceph-mgr[80383]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 07 09:44:41 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'stats'
Dec 07 09:44:41 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:44:41 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:44:41 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:44:41.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:44:41 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'status'
Dec 07 09:44:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:41 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c002e70 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:44:41.863+0000 7f789b18c140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec 07 09:44:41 compute-1 ceph-mgr[80383]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec 07 09:44:41 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'telegraf'
Dec 07 09:44:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:44:41.940+0000 7f789b18c140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 07 09:44:41 compute-1 ceph-mgr[80383]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 07 09:44:41 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'telemetry'
Dec 07 09:44:42 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:44:42.106+0000 7f789b18c140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 07 09:44:42 compute-1 ceph-mgr[80383]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 07 09:44:42 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'test_orchestrator'
Dec 07 09:44:42 compute-1 ceph-mon[80077]: 4.5 scrub starts
Dec 07 09:44:42 compute-1 ceph-mon[80077]: 4.5 scrub ok
Dec 07 09:44:42 compute-1 ceph-mon[80077]: osdmap e83: 3 total, 3 up, 3 in
Dec 07 09:44:42 compute-1 ceph-mon[80077]: 8.e scrub starts
Dec 07 09:44:42 compute-1 ceph-mon[80077]: 8.e scrub ok
Dec 07 09:44:42 compute-1 ceph-mon[80077]: 8.8 scrub starts
Dec 07 09:44:42 compute-1 ceph-mon[80077]: 8.8 scrub ok
Dec 07 09:44:42 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e84 e84: 3 total, 3 up, 3 in
Dec 07 09:44:42 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:44:42.348+0000 7f789b18c140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 07 09:44:42 compute-1 ceph-mgr[80383]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 07 09:44:42 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'volumes'
Dec 07 09:44:42 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Dec 07 09:44:42 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Dec 07 09:44:42 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:44:42.643+0000 7f789b18c140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 07 09:44:42 compute-1 ceph-mgr[80383]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 07 09:44:42 compute-1 ceph-mgr[80383]: mgr[py] Loading python module 'zabbix'
Dec 07 09:44:42 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 2025-12-07T09:44:42.721+0000 7f789b18c140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 07 09:44:42 compute-1 ceph-mgr[80383]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 07 09:44:42 compute-1 ceph-mgr[80383]: ms_deliver_dispatch: unhandled message 0x56122b545a00 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Dec 07 09:44:42 compute-1 ceph-mgr[80383]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 07 09:44:42 compute-1 ceph-mgr[80383]: mgr load Constructed class from module: dashboard
Dec 07 09:44:42 compute-1 ceph-mgr[80383]: [prometheus DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 07 09:44:42 compute-1 ceph-mgr[80383]: mgr load Constructed class from module: prometheus
Dec 07 09:44:42 compute-1 ceph-mgr[80383]: [dashboard INFO root] server: ssl=no host=192.168.122.101 port=8443
Dec 07 09:44:42 compute-1 ceph-mgr[80383]: [dashboard INFO root] Configured CherryPy, starting engine...
Dec 07 09:44:42 compute-1 ceph-mgr[80383]: [dashboard INFO root] Starting engine...
Dec 07 09:44:42 compute-1 ceph-mgr[80383]: [prometheus INFO root] server_addr: :: server_port: 9283
Dec 07 09:44:42 compute-1 ceph-mgr[80383]: [prometheus INFO root] Starting engine...
Dec 07 09:44:42 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: [07/Dec/2025:09:44:42] ENGINE Bus STARTING
Dec 07 09:44:42 compute-1 ceph-mgr[80383]: [prometheus INFO cherrypy.error] [07/Dec/2025:09:44:42] ENGINE Bus STARTING
Dec 07 09:44:42 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: CherryPy Checker:
Dec 07 09:44:42 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: The Application mounted at '' has an empty config.
Dec 07 09:44:42 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: 
Dec 07 09:44:42 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:42 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f74004050 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:42 compute-1 sshd-session[87433]: Accepted publickey for ceph-admin from 192.168.122.100 port 37528 ssh2: RSA SHA256:6l98s8c6wPNzPo5MkgJlyQXSDM1JtoqRSeqTJW3rr9A
Dec 07 09:44:42 compute-1 systemd-logind[796]: New session 37 of user ceph-admin.
Dec 07 09:44:42 compute-1 systemd[1]: Started Session 37 of User ceph-admin.
Dec 07 09:44:42 compute-1 ceph-mgr[80383]: [dashboard INFO root] Engine started...
Dec 07 09:44:42 compute-1 sshd-session[87433]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 07 09:44:42 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: [07/Dec/2025:09:44:42] ENGINE Serving on http://:::9283
Dec 07 09:44:42 compute-1 ceph-mgr[80383]: [prometheus INFO cherrypy.error] [07/Dec/2025:09:44:42] ENGINE Serving on http://:::9283
Dec 07 09:44:42 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-mgr-compute-1-buauyv[80379]: [07/Dec/2025:09:44:42] ENGINE Bus STARTED
Dec 07 09:44:42 compute-1 ceph-mgr[80383]: [prometheus INFO cherrypy.error] [07/Dec/2025:09:44:42] ENGINE Bus STARTED
Dec 07 09:44:42 compute-1 ceph-mgr[80383]: [prometheus INFO root] Engine started.
Dec 07 09:44:42 compute-1 sudo[87461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 09:44:42 compute-1 sudo[87461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:44:42 compute-1 sudo[87461]: pam_unix(sudo:session): session closed for user root
Dec 07 09:44:42 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 07 09:44:43 compute-1 sudo[87486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 07 09:44:43 compute-1 sudo[87486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:44:43 compute-1 ceph-mon[80077]: 9.c scrub starts
Dec 07 09:44:43 compute-1 ceph-mon[80077]: 9.c scrub ok
Dec 07 09:44:43 compute-1 ceph-mon[80077]: 9.e scrub starts
Dec 07 09:44:43 compute-1 ceph-mon[80077]: 9.e scrub ok
Dec 07 09:44:43 compute-1 ceph-mon[80077]: 4.0 deep-scrub starts
Dec 07 09:44:43 compute-1 ceph-mon[80077]: 4.0 deep-scrub ok
Dec 07 09:44:43 compute-1 ceph-mon[80077]: Standby manager daemon compute-2.ntknug restarted
Dec 07 09:44:43 compute-1 ceph-mon[80077]: Standby manager daemon compute-2.ntknug started
Dec 07 09:44:43 compute-1 ceph-mon[80077]: Active manager daemon compute-0.dotugk restarted
Dec 07 09:44:43 compute-1 ceph-mon[80077]: Activating manager daemon compute-0.dotugk
Dec 07 09:44:43 compute-1 ceph-mon[80077]: osdmap e84: 3 total, 3 up, 3 in
Dec 07 09:44:43 compute-1 ceph-mon[80077]: mgrmap e30: compute-0.dotugk(active, starting, since 0.0326677s), standbys: compute-1.buauyv, compute-2.ntknug
Dec 07 09:44:43 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 07 09:44:43 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 07 09:44:43 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 07 09:44:43 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.qgzqbk"}]: dispatch
Dec 07 09:44:43 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.rxtsyx"}]: dispatch
Dec 07 09:44:43 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.ihigcc"}]: dispatch
Dec 07 09:44:43 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mgr metadata", "who": "compute-0.dotugk", "id": "compute-0.dotugk"}]: dispatch
Dec 07 09:44:43 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mgr metadata", "who": "compute-1.buauyv", "id": "compute-1.buauyv"}]: dispatch
Dec 07 09:44:43 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mgr metadata", "who": "compute-2.ntknug", "id": "compute-2.ntknug"}]: dispatch
Dec 07 09:44:43 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 07 09:44:43 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 07 09:44:43 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 07 09:44:43 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 07 09:44:43 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 07 09:44:43 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 07 09:44:43 compute-1 ceph-mon[80077]: Manager daemon compute-0.dotugk is now available
Dec 07 09:44:43 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:43 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:44:43 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dotugk/mirror_snapshot_schedule"}]: dispatch
Dec 07 09:44:43 compute-1 ceph-mon[80077]: 11.1 scrub starts
Dec 07 09:44:43 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dotugk/trash_purge_schedule"}]: dispatch
Dec 07 09:44:43 compute-1 ceph-mon[80077]: 11.1 scrub ok
Dec 07 09:44:43 compute-1 ceph-mon[80077]: Standby manager daemon compute-1.buauyv restarted
Dec 07 09:44:43 compute-1 ceph-mon[80077]: Standby manager daemon compute-1.buauyv started
Dec 07 09:44:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:43 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f80001e60 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:43 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Dec 07 09:44:43 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Dec 07 09:44:43 compute-1 podman[87582]: 2025-12-07 09:44:43.634379085 +0000 UTC m=+0.076522885 container exec 0adb3b962f9be9f77484cea48a61ddb6dbdac27d2f6f2d11917c2759647cf1b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-crash-compute-1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Dec 07 09:44:43 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:44:43 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:44:43 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:44:43.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:44:43 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:44:43 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:44:43 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:44:43.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:44:43 compute-1 podman[87582]: 2025-12-07 09:44:43.770231068 +0000 UTC m=+0.212374848 container exec_died 0adb3b962f9be9f77484cea48a61ddb6dbdac27d2f6f2d11917c2759647cf1b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-crash-compute-1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec 07 09:44:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:43 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f58003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:44 compute-1 ceph-mon[80077]: 4.7 scrub starts
Dec 07 09:44:44 compute-1 ceph-mon[80077]: 4.7 scrub ok
Dec 07 09:44:44 compute-1 ceph-mon[80077]: mgrmap e31: compute-0.dotugk(active, since 1.05586s), standbys: compute-2.ntknug, compute-1.buauyv
Dec 07 09:44:44 compute-1 ceph-mon[80077]: 8.17 scrub starts
Dec 07 09:44:44 compute-1 ceph-mon[80077]: 8.17 scrub ok
Dec 07 09:44:44 compute-1 podman[87699]: 2025-12-07 09:44:44.317867607 +0000 UTC m=+0.076343221 container exec 0a4d2954764c22b86121bac3d11bc9b63e47782ce61bae7b4370d135249c4a00 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 07 09:44:44 compute-1 podman[87699]: 2025-12-07 09:44:44.355473301 +0000 UTC m=+0.113948905 container exec_died 0a4d2954764c22b86121bac3d11bc9b63e47782ce61bae7b4370d135249c4a00 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 07 09:44:44 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Dec 07 09:44:44 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Dec 07 09:44:44 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:44 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c002e70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:45 compute-1 podman[87790]: 2025-12-07 09:44:45.100325378 +0000 UTC m=+0.418641111 container exec 88b1575e7bf6a1c4a6a2738ad8a5b427833ca1d8abecd12798715ede0a232df4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 07 09:44:45 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e85 e85: 3 total, 3 up, 3 in
Dec 07 09:44:45 compute-1 podman[87811]: 2025-12-07 09:44:45.21385287 +0000 UTC m=+0.081413047 container exec_died 88b1575e7bf6a1c4a6a2738ad8a5b427833ca1d8abecd12798715ede0a232df4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 07 09:44:45 compute-1 podman[87790]: 2025-12-07 09:44:45.221063024 +0000 UTC m=+0.539378717 container exec_died 88b1575e7bf6a1c4a6a2738ad8a5b427833ca1d8abecd12798715ede0a232df4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec 07 09:44:45 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 85 pg[6.8( empty local-lis/les=0/0 n=0 ec=53/21 lis/c=53/53 les/c/f=54/54/0 sis=85) [1] r=0 lpr=85 pi=[53,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:45 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 85 pg[10.18( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=85 pruub=10.131623268s) [0] r=-1 lpr=85 pi=[57,85)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active pruub 251.312683105s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:45 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 85 pg[10.18( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=85 pruub=10.131457329s) [0] r=-1 lpr=85 pi=[57,85)/1 crt=56'1095 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 251.312683105s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:45 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 85 pg[10.8( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=85 pruub=10.130810738s) [0] r=-1 lpr=85 pi=[57,85)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active pruub 251.312469482s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:45 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 85 pg[10.8( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=85 pruub=10.130788803s) [0] r=-1 lpr=85 pi=[57,85)/1 crt=56'1095 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 251.312469482s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:45 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:45 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f74004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:45 compute-1 ceph-mon[80077]: [07/Dec/2025:09:44:43] ENGINE Bus STARTING
Dec 07 09:44:45 compute-1 ceph-mon[80077]: [07/Dec/2025:09:44:43] ENGINE Serving on http://192.168.122.100:8765
Dec 07 09:44:45 compute-1 ceph-mon[80077]: [07/Dec/2025:09:44:44] ENGINE Serving on https://192.168.122.100:7150
Dec 07 09:44:45 compute-1 ceph-mon[80077]: [07/Dec/2025:09:44:44] ENGINE Bus STARTED
Dec 07 09:44:45 compute-1 ceph-mon[80077]: [07/Dec/2025:09:44:44] ENGINE Client ('192.168.122.100', 35992) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 07 09:44:45 compute-1 ceph-mon[80077]: 4.1f scrub starts
Dec 07 09:44:45 compute-1 ceph-mon[80077]: 4.1f scrub ok
Dec 07 09:44:45 compute-1 ceph-mon[80077]: 8.0 scrub starts
Dec 07 09:44:45 compute-1 ceph-mon[80077]: 8.0 scrub ok
Dec 07 09:44:45 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Dec 07 09:44:45 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Dec 07 09:44:45 compute-1 ceph-mon[80077]: 4.1b scrub starts
Dec 07 09:44:45 compute-1 ceph-mon[80077]: 4.1b scrub ok
Dec 07 09:44:45 compute-1 podman[87856]: 2025-12-07 09:44:45.438867978 +0000 UTC m=+0.054103020 container exec beb6c210855d5dee9a79b7f50cd4854bb9f0b0f00930d5f252196d17b26aec7f (image=quay.io/ceph/haproxy:2.3, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua)
Dec 07 09:44:45 compute-1 podman[87856]: 2025-12-07 09:44:45.454001845 +0000 UTC m=+0.069236897 container exec_died beb6c210855d5dee9a79b7f50cd4854bb9f0b0f00930d5f252196d17b26aec7f (image=quay.io/ceph/haproxy:2.3, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua)
Dec 07 09:44:45 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 9.d deep-scrub starts
Dec 07 09:44:45 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 9.d deep-scrub ok
Dec 07 09:44:45 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:44:45 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:44:45 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:44:45.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:44:45 compute-1 podman[87923]: 2025-12-07 09:44:45.730587964 +0000 UTC m=+0.070097601 container exec 63d2276d352335c300047c0a6c5b69b30b1c28b930e12681d437c0d1d4a9599f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-nfs-cephfs-compute-1-gawwbe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, vcs-type=git, version=2.2.4, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, release=1793, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 07 09:44:45 compute-1 podman[87923]: 2025-12-07 09:44:45.752981578 +0000 UTC m=+0.092491185 container exec_died 63d2276d352335c300047c0a6c5b69b30b1c28b930e12681d437c0d1d4a9599f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-nfs-cephfs-compute-1-gawwbe, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, vcs-type=git, version=2.2.4, com.redhat.component=keepalived-container, description=keepalived for Ceph, distribution-scope=public, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, release=1793, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc.)
Dec 07 09:44:45 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:44:45 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:44:45 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:44:45.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:44:45 compute-1 sudo[87486]: pam_unix(sudo:session): session closed for user root
Dec 07 09:44:45 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:45 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f74004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:46 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 11.f deep-scrub starts
Dec 07 09:44:46 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 11.f deep-scrub ok
Dec 07 09:44:46 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e86 e86: 3 total, 3 up, 3 in
Dec 07 09:44:46 compute-1 ceph-mon[80077]: pgmap v4: 337 pgs: 337 active+clean; 455 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Dec 07 09:44:46 compute-1 ceph-mon[80077]: 9.1 scrub starts
Dec 07 09:44:46 compute-1 ceph-mon[80077]: 9.1 scrub ok
Dec 07 09:44:46 compute-1 ceph-mon[80077]: 11.17 scrub starts
Dec 07 09:44:46 compute-1 ceph-mon[80077]: 11.17 scrub ok
Dec 07 09:44:46 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Dec 07 09:44:46 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Dec 07 09:44:46 compute-1 ceph-mon[80077]: osdmap e85: 3 total, 3 up, 3 in
Dec 07 09:44:46 compute-1 ceph-mon[80077]: mgrmap e32: compute-0.dotugk(active, since 3s), standbys: compute-2.ntknug, compute-1.buauyv
Dec 07 09:44:46 compute-1 ceph-mon[80077]: 9.d deep-scrub starts
Dec 07 09:44:46 compute-1 ceph-mon[80077]: 9.d deep-scrub ok
Dec 07 09:44:46 compute-1 ceph-mon[80077]: 11.1f scrub starts
Dec 07 09:44:46 compute-1 ceph-mon[80077]: 11.1f scrub ok
Dec 07 09:44:46 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:46 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 86 pg[10.8( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=86) [0]/[1] r=0 lpr=86 pi=[57,86)/1 crt=56'1095 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:46 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 86 pg[10.8( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=86) [0]/[1] r=0 lpr=86 pi=[57,86)/1 crt=56'1095 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:46 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 86 pg[10.18( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=86) [0]/[1] r=0 lpr=86 pi=[57,86)/1 crt=56'1095 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:46 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 86 pg[10.18( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=86) [0]/[1] r=0 lpr=86 pi=[57,86)/1 crt=56'1095 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:46 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 86 pg[6.8( v 46'39 (0'0,46'39] local-lis/les=85/86 n=1 ec=53/21 lis/c=53/53 les/c/f=54/54/0 sis=85) [1] r=0 lpr=85 pi=[53,85)/1 crt=46'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:46 compute-1 sudo[87955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 09:44:46 compute-1 sudo[87955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:44:46 compute-1 sudo[87955]: pam_unix(sudo:session): session closed for user root
Dec 07 09:44:46 compute-1 sudo[87980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 07 09:44:46 compute-1 sudo[87980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:44:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:46 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f58003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:47 compute-1 sudo[87980]: pam_unix(sudo:session): session closed for user root
Dec 07 09:44:47 compute-1 sudo[88036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 09:44:47 compute-1 sudo[88036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:44:47 compute-1 sudo[88036]: pam_unix(sudo:session): session closed for user root
Dec 07 09:44:47 compute-1 sudo[88061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Dec 07 09:44:47 compute-1 sudo[88061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:44:47 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:47 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c002e70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:47 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Dec 07 09:44:47 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Dec 07 09:44:47 compute-1 ceph-mon[80077]: 11.16 deep-scrub starts
Dec 07 09:44:47 compute-1 ceph-mon[80077]: 11.16 deep-scrub ok
Dec 07 09:44:47 compute-1 ceph-mon[80077]: pgmap v6: 337 pgs: 337 active+clean; 455 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Dec 07 09:44:47 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Dec 07 09:44:47 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Dec 07 09:44:47 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:47 compute-1 ceph-mon[80077]: 11.f deep-scrub starts
Dec 07 09:44:47 compute-1 ceph-mon[80077]: 11.f deep-scrub ok
Dec 07 09:44:47 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:47 compute-1 ceph-mon[80077]: osdmap e86: 3 total, 3 up, 3 in
Dec 07 09:44:47 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:47 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:47 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:47 compute-1 ceph-mon[80077]: 4.10 scrub starts
Dec 07 09:44:47 compute-1 ceph-mon[80077]: 4.10 scrub ok
Dec 07 09:44:47 compute-1 ceph-mon[80077]: mgrmap e33: compute-0.dotugk(active, since 5s), standbys: compute-2.ntknug, compute-1.buauyv
Dec 07 09:44:47 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e87 e87: 3 total, 3 up, 3 in
Dec 07 09:44:47 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 87 pg[10.9( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=64/64 les/c/f=65/65/0 sis=87) [1] r=0 lpr=87 pi=[64,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:47 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 87 pg[10.19( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=64/64 les/c/f=65/65/0 sis=87) [1] r=0 lpr=87 pi=[64,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:47 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 87 pg[10.8( v 56'1095 (0'0,56'1095] local-lis/les=86/87 n=6 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=86) [0]/[1] async=[0] r=0 lpr=86 pi=[57,86)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:47 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 87 pg[10.18( v 56'1095 (0'0,56'1095] local-lis/les=86/87 n=5 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=86) [0]/[1] async=[0] r=0 lpr=86 pi=[57,86)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:47 compute-1 sudo[88061]: pam_unix(sudo:session): session closed for user root
Dec 07 09:44:47 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:44:47 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:44:47 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:44:47.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:44:47 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:47 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f80002780 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:48 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:44:48 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:44:48 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:44:48.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:44:48 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 07 09:44:48 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e88 e88: 3 total, 3 up, 3 in
Dec 07 09:44:48 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 88 pg[10.18( v 56'1095 (0'0,56'1095] local-lis/les=86/87 n=5 ec=57/42 lis/c=86/57 les/c/f=87/58/0 sis=88 pruub=15.272357941s) [0] async=[0] r=-1 lpr=88 pi=[57,88)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active pruub 259.466857910s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:48 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 88 pg[10.8( v 56'1095 (0'0,56'1095] local-lis/les=86/87 n=6 ec=57/42 lis/c=86/57 les/c/f=87/58/0 sis=88 pruub=15.272123337s) [0] async=[0] r=-1 lpr=88 pi=[57,88)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active pruub 259.466857910s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:48 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 88 pg[10.9( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=64/64 les/c/f=65/65/0 sis=88) [1]/[2] r=-1 lpr=88 pi=[64,88)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:48 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 88 pg[10.9( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=64/64 les/c/f=65/65/0 sis=88) [1]/[2] r=-1 lpr=88 pi=[64,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:48 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 88 pg[10.18( v 56'1095 (0'0,56'1095] local-lis/les=86/87 n=5 ec=57/42 lis/c=86/57 les/c/f=87/58/0 sis=88 pruub=15.271576881s) [0] r=-1 lpr=88 pi=[57,88)/1 crt=56'1095 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 259.466857910s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:48 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 88 pg[10.8( v 56'1095 (0'0,56'1095] local-lis/les=86/87 n=6 ec=57/42 lis/c=86/57 les/c/f=87/58/0 sis=88 pruub=15.271382332s) [0] r=-1 lpr=88 pi=[57,88)/1 crt=56'1095 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 259.466857910s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:48 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 88 pg[10.19( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=64/64 les/c/f=65/65/0 sis=88) [1]/[2] r=-1 lpr=88 pi=[64,88)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:48 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 88 pg[10.19( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=64/64 les/c/f=65/65/0 sis=88) [1]/[2] r=-1 lpr=88 pi=[64,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:48 compute-1 sudo[88106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 07 09:44:48 compute-1 sudo[88106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:44:48 compute-1 sudo[88106]: pam_unix(sudo:session): session closed for user root
Dec 07 09:44:48 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 9.a scrub starts
Dec 07 09:44:48 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 9.a scrub ok
Dec 07 09:44:48 compute-1 sudo[88131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/etc/ceph
Dec 07 09:44:48 compute-1 sudo[88131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:44:48 compute-1 sudo[88131]: pam_unix(sudo:session): session closed for user root
Dec 07 09:44:48 compute-1 sudo[88156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/etc/ceph/ceph.conf.new
Dec 07 09:44:48 compute-1 sudo[88156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:44:48 compute-1 sudo[88156]: pam_unix(sudo:session): session closed for user root
Dec 07 09:44:48 compute-1 ceph-mon[80077]: 8.1c scrub starts
Dec 07 09:44:48 compute-1 ceph-mon[80077]: 8.1c scrub ok
Dec 07 09:44:48 compute-1 ceph-mon[80077]: 8.19 scrub starts
Dec 07 09:44:48 compute-1 ceph-mon[80077]: 8.19 scrub ok
Dec 07 09:44:48 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Dec 07 09:44:48 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Dec 07 09:44:48 compute-1 ceph-mon[80077]: osdmap e87: 3 total, 3 up, 3 in
Dec 07 09:44:48 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:48 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:48 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 07 09:44:48 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Dec 07 09:44:48 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:48 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:48 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 07 09:44:48 compute-1 ceph-mon[80077]: 11.10 scrub starts
Dec 07 09:44:48 compute-1 ceph-mon[80077]: 11.10 scrub ok
Dec 07 09:44:48 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:48 compute-1 ceph-mon[80077]: osdmap e88: 3 total, 3 up, 3 in
Dec 07 09:44:48 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:48 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 07 09:44:48 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:44:48 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 07 09:44:48 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Dec 07 09:44:48 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Dec 07 09:44:48 compute-1 sudo[88181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c
Dec 07 09:44:48 compute-1 sudo[88181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:44:48 compute-1 sudo[88181]: pam_unix(sudo:session): session closed for user root
Dec 07 09:44:48 compute-1 sudo[88206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/etc/ceph/ceph.conf.new
Dec 07 09:44:48 compute-1 sudo[88206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:44:48 compute-1 sudo[88206]: pam_unix(sudo:session): session closed for user root
Dec 07 09:44:48 compute-1 sudo[88254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/etc/ceph/ceph.conf.new
Dec 07 09:44:48 compute-1 sudo[88254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:44:48 compute-1 sudo[88254]: pam_unix(sudo:session): session closed for user root
Dec 07 09:44:48 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:48 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f74004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:48 compute-1 sudo[88279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/etc/ceph/ceph.conf.new
Dec 07 09:44:48 compute-1 sudo[88279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:44:48 compute-1 sudo[88279]: pam_unix(sudo:session): session closed for user root
Dec 07 09:44:48 compute-1 sudo[88304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 07 09:44:48 compute-1 sudo[88304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:44:48 compute-1 sudo[88304]: pam_unix(sudo:session): session closed for user root
Dec 07 09:44:48 compute-1 sudo[88329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config
Dec 07 09:44:48 compute-1 sudo[88329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:44:48 compute-1 sudo[88329]: pam_unix(sudo:session): session closed for user root
Dec 07 09:44:48 compute-1 sudo[88354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config
Dec 07 09:44:48 compute-1 sudo[88354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:44:48 compute-1 sudo[88354]: pam_unix(sudo:session): session closed for user root
Dec 07 09:44:49 compute-1 sudo[88379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf.new
Dec 07 09:44:49 compute-1 sudo[88379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:44:49 compute-1 sudo[88379]: pam_unix(sudo:session): session closed for user root
Dec 07 09:44:49 compute-1 sudo[88404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c
Dec 07 09:44:49 compute-1 sudo[88404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:44:49 compute-1 sudo[88404]: pam_unix(sudo:session): session closed for user root
Dec 07 09:44:49 compute-1 sudo[88429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf.new
Dec 07 09:44:49 compute-1 sudo[88429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:44:49 compute-1 sudo[88429]: pam_unix(sudo:session): session closed for user root
Dec 07 09:44:49 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:49 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f74004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:49 compute-1 sudo[88477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf.new
Dec 07 09:44:49 compute-1 sudo[88477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:44:49 compute-1 sudo[88477]: pam_unix(sudo:session): session closed for user root
Dec 07 09:44:49 compute-1 sudo[88502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf.new
Dec 07 09:44:49 compute-1 sudo[88502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:44:49 compute-1 sudo[88502]: pam_unix(sudo:session): session closed for user root
Dec 07 09:44:49 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e89 e89: 3 total, 3 up, 3 in
Dec 07 09:44:49 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 89 pg[10.1a( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=66/66 les/c/f=67/67/0 sis=89) [1] r=0 lpr=89 pi=[66,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:49 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 89 pg[10.a( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=66/66 les/c/f=67/67/0 sis=89) [1] r=0 lpr=89 pi=[66,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:49 compute-1 sudo[88527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf.new /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec 07 09:44:49 compute-1 sudo[88527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:44:49 compute-1 sudo[88527]: pam_unix(sudo:session): session closed for user root
Dec 07 09:44:49 compute-1 sudo[88552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 07 09:44:49 compute-1 sudo[88552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:44:49 compute-1 sudo[88552]: pam_unix(sudo:session): session closed for user root
Dec 07 09:44:49 compute-1 sudo[88577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/etc/ceph
Dec 07 09:44:49 compute-1 sudo[88577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:44:49 compute-1 ceph-mon[80077]: 9.9 scrub starts
Dec 07 09:44:49 compute-1 ceph-mon[80077]: 9.9 scrub ok
Dec 07 09:44:49 compute-1 ceph-mon[80077]: Updating compute-0:/etc/ceph/ceph.conf
Dec 07 09:44:49 compute-1 ceph-mon[80077]: Updating compute-1:/etc/ceph/ceph.conf
Dec 07 09:44:49 compute-1 ceph-mon[80077]: Updating compute-2:/etc/ceph/ceph.conf
Dec 07 09:44:49 compute-1 ceph-mon[80077]: pgmap v10: 337 pgs: 337 active+clean; 455 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Dec 07 09:44:49 compute-1 ceph-mon[80077]: 9.a scrub starts
Dec 07 09:44:49 compute-1 ceph-mon[80077]: 9.a scrub ok
Dec 07 09:44:49 compute-1 ceph-mon[80077]: Updating compute-0:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec 07 09:44:49 compute-1 ceph-mon[80077]: Updating compute-1:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec 07 09:44:49 compute-1 ceph-mon[80077]: Updating compute-2:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.conf
Dec 07 09:44:49 compute-1 ceph-mon[80077]: 8.13 scrub starts
Dec 07 09:44:49 compute-1 ceph-mon[80077]: 8.13 scrub ok
Dec 07 09:44:49 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Dec 07 09:44:49 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Dec 07 09:44:49 compute-1 ceph-mon[80077]: osdmap e89: 3 total, 3 up, 3 in
Dec 07 09:44:49 compute-1 sudo[88577]: pam_unix(sudo:session): session closed for user root
Dec 07 09:44:49 compute-1 sudo[88602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/etc/ceph/ceph.client.admin.keyring.new
Dec 07 09:44:49 compute-1 sudo[88602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:44:49 compute-1 sudo[88602]: pam_unix(sudo:session): session closed for user root
Dec 07 09:44:49 compute-1 sudo[88627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c
Dec 07 09:44:49 compute-1 sudo[88627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:44:49 compute-1 sudo[88627]: pam_unix(sudo:session): session closed for user root
Dec 07 09:44:49 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:44:49 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:44:49 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:44:49.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:44:49 compute-1 sudo[88652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/etc/ceph/ceph.client.admin.keyring.new
Dec 07 09:44:49 compute-1 sudo[88652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:44:49 compute-1 sudo[88652]: pam_unix(sudo:session): session closed for user root
Dec 07 09:44:49 compute-1 sudo[88700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/etc/ceph/ceph.client.admin.keyring.new
Dec 07 09:44:49 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:49 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c003f70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:49 compute-1 sudo[88700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:44:49 compute-1 sudo[88700]: pam_unix(sudo:session): session closed for user root
Dec 07 09:44:49 compute-1 sudo[88725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/etc/ceph/ceph.client.admin.keyring.new
Dec 07 09:44:49 compute-1 sudo[88725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:44:49 compute-1 sudo[88725]: pam_unix(sudo:session): session closed for user root
Dec 07 09:44:49 compute-1 sudo[88750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 07 09:44:49 compute-1 sudo[88750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:44:49 compute-1 sudo[88750]: pam_unix(sudo:session): session closed for user root
Dec 07 09:44:50 compute-1 sudo[88775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config
Dec 07 09:44:50 compute-1 sudo[88775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:44:50 compute-1 sudo[88775]: pam_unix(sudo:session): session closed for user root
Dec 07 09:44:50 compute-1 sudo[88800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config
Dec 07 09:44:50 compute-1 sudo[88800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:44:50 compute-1 sudo[88800]: pam_unix(sudo:session): session closed for user root
Dec 07 09:44:50 compute-1 sudo[88826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring.new
Dec 07 09:44:50 compute-1 sudo[88826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:44:50 compute-1 sudo[88826]: pam_unix(sudo:session): session closed for user root
Dec 07 09:44:50 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:44:50 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:44:50 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:44:50.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:44:50 compute-1 sudo[88851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c
Dec 07 09:44:50 compute-1 sudo[88851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:44:50 compute-1 sudo[88851]: pam_unix(sudo:session): session closed for user root
Dec 07 09:44:50 compute-1 sudo[88876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring.new
Dec 07 09:44:50 compute-1 sudo[88876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:44:50 compute-1 sudo[88876]: pam_unix(sudo:session): session closed for user root
Dec 07 09:44:50 compute-1 sudo[88924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring.new
Dec 07 09:44:50 compute-1 sudo[88924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:44:50 compute-1 sudo[88924]: pam_unix(sudo:session): session closed for user root
Dec 07 09:44:50 compute-1 sudo[88949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring.new
Dec 07 09:44:50 compute-1 sudo[88949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:44:50 compute-1 sudo[88949]: pam_unix(sudo:session): session closed for user root
Dec 07 09:44:50 compute-1 sudo[88974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-75f4c9fd-539a-5e17-b55a-0a12a4e2736c/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring.new /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring
Dec 07 09:44:50 compute-1 sudo[88974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:44:50 compute-1 sudo[88974]: pam_unix(sudo:session): session closed for user root
Dec 07 09:44:50 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e90 e90: 3 total, 3 up, 3 in
Dec 07 09:44:50 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:50 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f80002780 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:50 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 90 pg[10.a( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=66/66 les/c/f=67/67/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[66,90)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:50 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 90 pg[10.19( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=5 ec=57/42 lis/c=88/64 les/c/f=89/65/0 sis=90) [1] r=0 lpr=90 pi=[64,90)/1 luod=0'0 crt=56'1095 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:50 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 90 pg[10.19( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=5 ec=57/42 lis/c=88/64 les/c/f=89/65/0 sis=90) [1] r=0 lpr=90 pi=[64,90)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:50 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 90 pg[10.a( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=66/66 les/c/f=67/67/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[66,90)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:50 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 90 pg[10.1a( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=66/66 les/c/f=67/67/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[66,90)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:50 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 90 pg[10.1a( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=66/66 les/c/f=67/67/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[66,90)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 07 09:44:50 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 90 pg[10.9( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=6 ec=57/42 lis/c=88/64 les/c/f=89/65/0 sis=90) [1] r=0 lpr=90 pi=[64,90)/1 luod=0'0 crt=56'1095 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:50 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 90 pg[10.9( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=6 ec=57/42 lis/c=88/64 les/c/f=89/65/0 sis=90) [1] r=0 lpr=90 pi=[64,90)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:50 compute-1 ceph-mon[80077]: 7.14 deep-scrub starts
Dec 07 09:44:50 compute-1 ceph-mon[80077]: 7.14 deep-scrub ok
Dec 07 09:44:50 compute-1 ceph-mon[80077]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 07 09:44:50 compute-1 ceph-mon[80077]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 07 09:44:50 compute-1 ceph-mon[80077]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 07 09:44:50 compute-1 ceph-mon[80077]: Updating compute-1:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring
Dec 07 09:44:50 compute-1 ceph-mon[80077]: Updating compute-0:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring
Dec 07 09:44:50 compute-1 ceph-mon[80077]: 4.1e scrub starts
Dec 07 09:44:50 compute-1 ceph-mon[80077]: 4.1e scrub ok
Dec 07 09:44:50 compute-1 ceph-mon[80077]: Updating compute-2:/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/config/ceph.client.admin.keyring
Dec 07 09:44:51 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:51 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f58003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:51 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:44:51 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:44:51 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:44:51.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:44:51 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e91 e91: 3 total, 3 up, 3 in
Dec 07 09:44:51 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 91 pg[10.19( v 56'1095 (0'0,56'1095] local-lis/les=90/91 n=5 ec=57/42 lis/c=88/64 les/c/f=89/65/0 sis=90) [1] r=0 lpr=90 pi=[64,90)/1 crt=56'1095 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:51 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 91 pg[10.9( v 56'1095 (0'0,56'1095] local-lis/les=90/91 n=6 ec=57/42 lis/c=88/64 les/c/f=89/65/0 sis=90) [1] r=0 lpr=90 pi=[64,90)/1 crt=56'1095 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:51 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:51 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f74004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:51 compute-1 ceph-mon[80077]: 9.1d scrub starts
Dec 07 09:44:51 compute-1 ceph-mon[80077]: 9.1d scrub ok
Dec 07 09:44:51 compute-1 ceph-mon[80077]: pgmap v12: 337 pgs: 2 remapped+peering, 2 peering, 333 active+clean; 455 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 0 B/s wr, 13 op/s; 54 B/s, 2 objects/s recovering
Dec 07 09:44:51 compute-1 ceph-mon[80077]: osdmap e90: 3 total, 3 up, 3 in
Dec 07 09:44:51 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:51 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:51 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:51 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:51 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:51 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:51 compute-1 ceph-mon[80077]: 11.11 scrub starts
Dec 07 09:44:51 compute-1 ceph-mon[80077]: 11.11 scrub ok
Dec 07 09:44:51 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:51 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:51 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 07 09:44:51 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 07 09:44:51 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:44:51 compute-1 ceph-mon[80077]: osdmap e91: 3 total, 3 up, 3 in
Dec 07 09:44:52 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:44:52 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:44:52 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:44:52.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:44:52 compute-1 sshd-session[89000]: Accepted publickey for zuul from 192.168.122.30 port 43842 ssh2: ECDSA SHA256:Ge4vEI3tJyOAEV3kv3WUGTzBV/uoL+dgWZHkPhbKL64
Dec 07 09:44:52 compute-1 systemd-logind[796]: New session 38 of user zuul.
Dec 07 09:44:52 compute-1 systemd[1]: Started Session 38 of User zuul.
Dec 07 09:44:52 compute-1 sshd-session[89000]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 07 09:44:52 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Dec 07 09:44:52 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Dec 07 09:44:52 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:52 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c003f70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:52 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e92 e92: 3 total, 3 up, 3 in
Dec 07 09:44:52 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 92 pg[10.a( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=6 ec=57/42 lis/c=90/66 les/c/f=91/67/0 sis=92) [1] r=0 lpr=92 pi=[66,92)/1 luod=0'0 crt=56'1095 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:52 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 92 pg[10.a( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=6 ec=57/42 lis/c=90/66 les/c/f=91/67/0 sis=92) [1] r=0 lpr=92 pi=[66,92)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:52 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 92 pg[10.1a( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=5 ec=57/42 lis/c=90/66 les/c/f=91/67/0 sis=92) [1] r=0 lpr=92 pi=[66,92)/1 luod=0'0 crt=56'1095 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:44:52 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 92 pg[10.1a( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=5 ec=57/42 lis/c=90/66 les/c/f=91/67/0 sis=92) [1] r=0 lpr=92 pi=[66,92)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:44:52 compute-1 ceph-mon[80077]: 8.15 scrub starts
Dec 07 09:44:52 compute-1 ceph-mon[80077]: 8.15 scrub ok
Dec 07 09:44:52 compute-1 ceph-mon[80077]: 5.1d scrub starts
Dec 07 09:44:52 compute-1 ceph-mon[80077]: 5.1d scrub ok
Dec 07 09:44:52 compute-1 ceph-mon[80077]: 10.19 scrub starts
Dec 07 09:44:52 compute-1 ceph-mon[80077]: 10.19 scrub ok
Dec 07 09:44:52 compute-1 ceph-mon[80077]: osdmap e92: 3 total, 3 up, 3 in
Dec 07 09:44:53 compute-1 python3.9[89153]: ansible-ansible.legacy.ping Invoked with data=pong
Dec 07 09:44:53 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 07 09:44:53 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:53 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f80003490 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:53 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Dec 07 09:44:53 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Dec 07 09:44:53 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:44:53 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:44:53 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:44:53.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:44:53 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e93 e93: 3 total, 3 up, 3 in
Dec 07 09:44:53 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 93 pg[10.a( v 56'1095 (0'0,56'1095] local-lis/les=92/93 n=6 ec=57/42 lis/c=90/66 les/c/f=91/67/0 sis=92) [1] r=0 lpr=92 pi=[66,92)/1 crt=56'1095 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:53 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 93 pg[10.1a( v 56'1095 (0'0,56'1095] local-lis/les=92/93 n=5 ec=57/42 lis/c=90/66 les/c/f=91/67/0 sis=92) [1] r=0 lpr=92 pi=[66,92)/1 crt=56'1095 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:44:53 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:53 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f58003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:53 compute-1 ceph-mon[80077]: pgmap v15: 337 pgs: 2 remapped+peering, 2 peering, 333 active+clean; 455 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 0 B/s wr, 13 op/s; 54 B/s, 2 objects/s recovering
Dec 07 09:44:53 compute-1 ceph-mon[80077]: 8.3 deep-scrub starts
Dec 07 09:44:53 compute-1 ceph-mon[80077]: 8.3 deep-scrub ok
Dec 07 09:44:53 compute-1 ceph-mon[80077]: 7.18 scrub starts
Dec 07 09:44:53 compute-1 ceph-mon[80077]: 7.18 scrub ok
Dec 07 09:44:53 compute-1 ceph-mon[80077]: 10.9 scrub starts
Dec 07 09:44:53 compute-1 ceph-mon[80077]: 10.9 scrub ok
Dec 07 09:44:53 compute-1 ceph-mon[80077]: osdmap e93: 3 total, 3 up, 3 in
Dec 07 09:44:54 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:44:54 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:44:54 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:44:54.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:44:54 compute-1 python3.9[89327]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 07 09:44:54 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Dec 07 09:44:54 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Dec 07 09:44:54 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:54 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f74004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:54 compute-1 ceph-mon[80077]: 11.a scrub starts
Dec 07 09:44:54 compute-1 ceph-mon[80077]: 11.a scrub ok
Dec 07 09:44:54 compute-1 ceph-mon[80077]: 12.12 deep-scrub starts
Dec 07 09:44:54 compute-1 ceph-mon[80077]: 12.12 deep-scrub ok
Dec 07 09:44:54 compute-1 ceph-mon[80077]: 8.14 scrub starts
Dec 07 09:44:54 compute-1 ceph-mon[80077]: 8.14 scrub ok
Dec 07 09:44:55 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:55 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c003f70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:55 compute-1 sudo[89482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cadwagxcvkkniqwooqafdqtwmlurjysm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100694.9205785-94-145675313045693/AnsiballZ_command.py'
Dec 07 09:44:55 compute-1 sudo[89482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:44:55 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Dec 07 09:44:55 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Dec 07 09:44:55 compute-1 python3.9[89484]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:44:55 compute-1 sudo[89482]: pam_unix(sudo:session): session closed for user root
Dec 07 09:44:55 compute-1 sudo[89488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 09:44:55 compute-1 sudo[89488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:44:55 compute-1 sudo[89488]: pam_unix(sudo:session): session closed for user root
Dec 07 09:44:55 compute-1 sudo[89535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 07 09:44:55 compute-1 sudo[89535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:44:55 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:44:55 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:44:55 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:44:55.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:44:55 compute-1 sudo[89535]: pam_unix(sudo:session): session closed for user root
Dec 07 09:44:55 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:55 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f80003490 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:55 compute-1 ceph-mon[80077]: pgmap v18: 337 pgs: 2 peering, 335 active+clean; 455 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 82 B/s, 2 objects/s recovering
Dec 07 09:44:55 compute-1 ceph-mon[80077]: 8.f scrub starts
Dec 07 09:44:55 compute-1 ceph-mon[80077]: 8.f scrub ok
Dec 07 09:44:55 compute-1 ceph-mon[80077]: 7.10 scrub starts
Dec 07 09:44:55 compute-1 ceph-mon[80077]: 7.10 scrub ok
Dec 07 09:44:55 compute-1 ceph-mon[80077]: 9.15 scrub starts
Dec 07 09:44:55 compute-1 ceph-mon[80077]: 9.15 scrub ok
Dec 07 09:44:55 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:55 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:55 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:44:56 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:44:56 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:44:56 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:44:56.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:44:56 compute-1 sudo[89686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkxnzoauxbvxrtliyiolpoevxqveyrbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100696.01598-130-165147570768826/AnsiballZ_stat.py'
Dec 07 09:44:56 compute-1 sudo[89686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:44:56 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 9.10 deep-scrub starts
Dec 07 09:44:56 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 9.10 deep-scrub ok
Dec 07 09:44:56 compute-1 python3.9[89688]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 07 09:44:56 compute-1 sudo[89686]: pam_unix(sudo:session): session closed for user root
Dec 07 09:44:56 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:56 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f58003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:57 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f74004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:57 compute-1 sudo[89840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-virptrvyxrshmmaoidsyftrtxcycicyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100696.9801095-163-152461094001420/AnsiballZ_file.py'
Dec 07 09:44:57 compute-1 sudo[89840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:44:57 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Dec 07 09:44:57 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Dec 07 09:44:57 compute-1 python3.9[89842]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:44:57 compute-1 sudo[89840]: pam_unix(sudo:session): session closed for user root
Dec 07 09:44:57 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:44:57 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:44:57 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:44:57.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:44:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:57 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c003f70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:58 compute-1 sudo[89993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jejdheakpzirhjxcrevjhdenfwebohqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100697.8773694-190-253569910987532/AnsiballZ_file.py'
Dec 07 09:44:58 compute-1 sudo[89993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:44:58 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:44:58 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:44:58 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:44:58.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:44:58 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 07 09:44:58 compute-1 python3.9[89995]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:44:58 compute-1 sudo[89993]: pam_unix(sudo:session): session closed for user root
Dec 07 09:44:58 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Dec 07 09:44:58 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Dec 07 09:44:58 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:58 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f80003490 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:59 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:59 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f58003c30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:44:59 compute-1 python3.9[90145]: ansible-ansible.builtin.service_facts Invoked
Dec 07 09:44:59 compute-1 ceph-mon[80077]: 12.18 scrub starts
Dec 07 09:44:59 compute-1 ceph-mon[80077]: 12.18 scrub ok
Dec 07 09:44:59 compute-1 ceph-mon[80077]: Reconfiguring alertmanager.compute-0 (dependencies changed)...
Dec 07 09:44:59 compute-1 ceph-mon[80077]: Reconfiguring daemon alertmanager.compute-0 on compute-0
Dec 07 09:44:59 compute-1 ceph-mon[80077]: 5.a scrub starts
Dec 07 09:44:59 compute-1 ceph-mon[80077]: 5.a scrub ok
Dec 07 09:44:59 compute-1 ceph-mon[80077]: 9.10 deep-scrub starts
Dec 07 09:44:59 compute-1 ceph-mon[80077]: 9.10 deep-scrub ok
Dec 07 09:44:59 compute-1 network[90162]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 07 09:44:59 compute-1 network[90163]: 'network-scripts' will be removed from distribution in near future.
Dec 07 09:44:59 compute-1 network[90164]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 07 09:44:59 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Dec 07 09:44:59 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Dec 07 09:44:59 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:44:59 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.012000319s ======
Dec 07 09:44:59 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:44:59.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.012000319s
Dec 07 09:44:59 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:44:59 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f74004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:00 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:00 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:45:00 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:00.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:45:00 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 4.d scrub starts
Dec 07 09:45:00 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 4.d scrub ok
Dec 07 09:45:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:00 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c003f70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:01 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:01 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f80004920 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:01 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Dec 07 09:45:01 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:01 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:45:01 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:01.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:45:01 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:01 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f88002010 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:02 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Dec 07 09:45:02 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:02 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:45:02 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:02.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:45:02 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 4.c scrub starts
Dec 07 09:45:02 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 4.c scrub ok
Dec 07 09:45:02 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:02 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f74004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:03 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:03 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c003f70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:03 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 07 09:45:03 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Dec 07 09:45:03 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Dec 07 09:45:03 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:03 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:45:03 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:03.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:45:03 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:03 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f88002010 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:04 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:04 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:45:04 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:04.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:45:04 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e94 e94: 3 total, 3 up, 3 in
Dec 07 09:45:04 compute-1 ceph-mon[80077]: pgmap v19: 337 pgs: 2 peering, 335 active+clean; 455 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 59 B/s, 1 objects/s recovering
Dec 07 09:45:04 compute-1 ceph-mon[80077]: 4.3 deep-scrub starts
Dec 07 09:45:04 compute-1 ceph-mon[80077]: 4.3 deep-scrub ok
Dec 07 09:45:04 compute-1 ceph-mon[80077]: 7.8 scrub starts
Dec 07 09:45:04 compute-1 ceph-mon[80077]: 7.8 scrub ok
Dec 07 09:45:04 compute-1 ceph-mon[80077]: 9.5 scrub starts
Dec 07 09:45:04 compute-1 ceph-mon[80077]: 9.5 scrub ok
Dec 07 09:45:04 compute-1 ceph-mon[80077]: 5.1c scrub starts
Dec 07 09:45:04 compute-1 ceph-mon[80077]: 5.1c scrub ok
Dec 07 09:45:04 compute-1 ceph-mon[80077]: 12.6 scrub starts
Dec 07 09:45:04 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:45:04 compute-1 ceph-mon[80077]: pgmap v20: 337 pgs: 2 peering, 335 active+clean; 455 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 50 B/s, 1 objects/s recovering
Dec 07 09:45:04 compute-1 ceph-mon[80077]: 4.8 scrub starts
Dec 07 09:45:04 compute-1 ceph-mon[80077]: 4.8 scrub ok
Dec 07 09:45:04 compute-1 ceph-mon[80077]: 11.12 scrub starts
Dec 07 09:45:04 compute-1 ceph-mon[80077]: 11.12 scrub ok
Dec 07 09:45:04 compute-1 ceph-mon[80077]: 12.6 scrub ok
Dec 07 09:45:04 compute-1 ceph-mon[80077]: 7.9 scrub starts
Dec 07 09:45:04 compute-1 ceph-mon[80077]: 9.b scrub starts
Dec 07 09:45:04 compute-1 ceph-mon[80077]: 9.b scrub ok
Dec 07 09:45:04 compute-1 ceph-mon[80077]: 7.9 scrub ok
Dec 07 09:45:04 compute-1 ceph-mon[80077]: 9.12 scrub starts
Dec 07 09:45:04 compute-1 ceph-mon[80077]: 9.12 scrub ok
Dec 07 09:45:04 compute-1 ceph-mon[80077]: 12.1c scrub starts
Dec 07 09:45:04 compute-1 ceph-mon[80077]: 12.1c scrub ok
Dec 07 09:45:04 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Dec 07 09:45:04 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Dec 07 09:45:04 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 94 pg[10.b( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=65/65 les/c/f=66/66/0 sis=94) [1] r=0 lpr=94 pi=[65,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:45:04 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 94 pg[10.1b( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=64/64 les/c/f=65/65/0 sis=94) [1] r=0 lpr=94 pi=[64,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:45:04 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 94 pg[6.b( v 46'39 (0'0,46'39] local-lis/les=68/69 n=1 ec=53/21 lis/c=68/68 les/c/f=69/69/0 sis=94 pruub=12.407071114s) [0] r=-1 lpr=94 pi=[68,94)/1 crt=46'39 mlcod 46'39 active pruub 272.885253906s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:45:04 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 94 pg[6.b( v 46'39 (0'0,46'39] local-lis/les=68/69 n=1 ec=53/21 lis/c=68/68 les/c/f=69/69/0 sis=94 pruub=12.406301498s) [0] r=-1 lpr=94 pi=[68,94)/1 crt=46'39 mlcod 0'0 unknown NOTIFY pruub 272.885253906s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:45:04 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Dec 07 09:45:04 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Dec 07 09:45:04 compute-1 python3.9[90428]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:45:04 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:04 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f80004920 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:05 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:05 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f74004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:05 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 5.1 deep-scrub starts
Dec 07 09:45:05 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 5.1 deep-scrub ok
Dec 07 09:45:05 compute-1 python3.9[90578]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 07 09:45:05 compute-1 ceph-mon[80077]: pgmap v21: 337 pgs: 1 active+clean+scrubbing, 336 active+clean; 455 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Dec 07 09:45:05 compute-1 ceph-mon[80077]: 8.9 scrub starts
Dec 07 09:45:05 compute-1 ceph-mon[80077]: 8.9 scrub ok
Dec 07 09:45:05 compute-1 ceph-mon[80077]: 4.d scrub starts
Dec 07 09:45:05 compute-1 ceph-mon[80077]: 4.d scrub ok
Dec 07 09:45:05 compute-1 ceph-mon[80077]: 5.5 scrub starts
Dec 07 09:45:05 compute-1 ceph-mon[80077]: 5.5 scrub ok
Dec 07 09:45:05 compute-1 ceph-mon[80077]: 4.6 deep-scrub starts
Dec 07 09:45:05 compute-1 ceph-mon[80077]: 4.6 deep-scrub ok
Dec 07 09:45:05 compute-1 ceph-mon[80077]: 11.1a scrub starts
Dec 07 09:45:05 compute-1 ceph-mon[80077]: 12.c scrub starts
Dec 07 09:45:05 compute-1 ceph-mon[80077]: 11.1a scrub ok
Dec 07 09:45:05 compute-1 ceph-mon[80077]: pgmap v22: 337 pgs: 1 active+clean+scrubbing, 336 active+clean; 455 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Dec 07 09:45:05 compute-1 ceph-mon[80077]: 8.a scrub starts
Dec 07 09:45:05 compute-1 ceph-mon[80077]: 8.a scrub ok
Dec 07 09:45:05 compute-1 ceph-mon[80077]: 4.c scrub starts
Dec 07 09:45:05 compute-1 ceph-mon[80077]: 4.c scrub ok
Dec 07 09:45:05 compute-1 ceph-mon[80077]: 7.5 deep-scrub starts
Dec 07 09:45:05 compute-1 ceph-mon[80077]: 7.5 deep-scrub ok
Dec 07 09:45:05 compute-1 ceph-mon[80077]: 8.12 scrub starts
Dec 07 09:45:05 compute-1 ceph-mon[80077]: 8.12 scrub ok
Dec 07 09:45:05 compute-1 ceph-mon[80077]: 7.e scrub starts
Dec 07 09:45:05 compute-1 ceph-mon[80077]: 12.c scrub ok
Dec 07 09:45:05 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Dec 07 09:45:05 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Dec 07 09:45:05 compute-1 ceph-mon[80077]: 7.e scrub ok
Dec 07 09:45:05 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Dec 07 09:45:05 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Dec 07 09:45:05 compute-1 ceph-mon[80077]: pgmap v23: 337 pgs: 2 active+clean+scrubbing, 335 active+clean; 455 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Dec 07 09:45:05 compute-1 ceph-mon[80077]: 9.8 scrub starts
Dec 07 09:45:05 compute-1 ceph-mon[80077]: 9.8 scrub ok
Dec 07 09:45:05 compute-1 ceph-mon[80077]: osdmap e94: 3 total, 3 up, 3 in
Dec 07 09:45:05 compute-1 ceph-mon[80077]: 5.1b scrub starts
Dec 07 09:45:05 compute-1 ceph-mon[80077]: 5.1b scrub ok
Dec 07 09:45:05 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Dec 07 09:45:05 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Dec 07 09:45:05 compute-1 ceph-mon[80077]: 12.10 scrub starts
Dec 07 09:45:05 compute-1 ceph-mon[80077]: 12.10 scrub ok
Dec 07 09:45:05 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:45:05 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:45:05 compute-1 ceph-mon[80077]: Reconfiguring grafana.compute-0 (dependencies changed)...
Dec 07 09:45:05 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e95 e95: 3 total, 3 up, 3 in
Dec 07 09:45:05 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 95 pg[10.1b( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=64/64 les/c/f=65/65/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[64,95)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:45:05 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 95 pg[10.1b( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=64/64 les/c/f=65/65/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[64,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 07 09:45:05 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 95 pg[10.b( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=65/65 les/c/f=66/66/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[65,95)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:45:05 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 95 pg[10.b( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=65/65 les/c/f=66/66/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[65,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 07 09:45:05 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:05 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:45:05 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:05.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:45:05 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:05 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c003f70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:06 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:06 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:45:06 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:06.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:45:06 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 5.f scrub starts
Dec 07 09:45:06 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 5.f scrub ok
Dec 07 09:45:06 compute-1 ceph-mon[80077]: Reconfiguring daemon grafana.compute-0 on compute-0
Dec 07 09:45:06 compute-1 ceph-mon[80077]: 7.11 deep-scrub starts
Dec 07 09:45:06 compute-1 ceph-mon[80077]: 7.11 deep-scrub ok
Dec 07 09:45:06 compute-1 ceph-mon[80077]: 5.1 deep-scrub starts
Dec 07 09:45:06 compute-1 ceph-mon[80077]: 5.1 deep-scrub ok
Dec 07 09:45:06 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Dec 07 09:45:06 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Dec 07 09:45:06 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Dec 07 09:45:06 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Dec 07 09:45:06 compute-1 ceph-mon[80077]: osdmap e95: 3 total, 3 up, 3 in
Dec 07 09:45:06 compute-1 ceph-mon[80077]: 12.e scrub starts
Dec 07 09:45:06 compute-1 ceph-mon[80077]: 12.e scrub ok
Dec 07 09:45:06 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Dec 07 09:45:06 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Dec 07 09:45:06 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e96 e96: 3 total, 3 up, 3 in
Dec 07 09:45:06 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 96 pg[10.1c( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=73/73 les/c/f=74/74/0 sis=96) [1] r=0 lpr=96 pi=[73,96)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:45:06 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 96 pg[10.c( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=73/73 les/c/f=74/74/0 sis=96) [1] r=0 lpr=96 pi=[73,96)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:45:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:06 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f88002010 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:07 compute-1 python3.9[90733]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 07 09:45:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:07 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f80004920 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:07 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 8.10 deep-scrub starts
Dec 07 09:45:07 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 8.10 deep-scrub ok
Dec 07 09:45:07 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:07 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:45:07 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:07.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:45:07 compute-1 ceph-mon[80077]: pgmap v26: 337 pgs: 2 active+clean+scrubbing, 335 active+clean; 455 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Dec 07 09:45:07 compute-1 ceph-mon[80077]: 4.2 scrub starts
Dec 07 09:45:07 compute-1 ceph-mon[80077]: 4.2 scrub ok
Dec 07 09:45:07 compute-1 ceph-mon[80077]: 5.f scrub starts
Dec 07 09:45:07 compute-1 ceph-mon[80077]: 5.f scrub ok
Dec 07 09:45:07 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Dec 07 09:45:07 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Dec 07 09:45:07 compute-1 ceph-mon[80077]: osdmap e96: 3 total, 3 up, 3 in
Dec 07 09:45:07 compute-1 ceph-mon[80077]: 7.6 scrub starts
Dec 07 09:45:07 compute-1 ceph-mon[80077]: 7.6 scrub ok
Dec 07 09:45:07 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e97 e97: 3 total, 3 up, 3 in
Dec 07 09:45:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 97 pg[10.1b( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=5 ec=57/42 lis/c=95/64 les/c/f=96/65/0 sis=97) [1] r=0 lpr=97 pi=[64,97)/1 luod=0'0 crt=56'1095 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:45:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 97 pg[10.c( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=73/73 les/c/f=74/74/0 sis=97) [1]/[2] r=-1 lpr=97 pi=[73,97)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:45:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 97 pg[10.c( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=73/73 les/c/f=74/74/0 sis=97) [1]/[2] r=-1 lpr=97 pi=[73,97)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 07 09:45:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 97 pg[10.b( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=6 ec=57/42 lis/c=95/65 les/c/f=96/66/0 sis=97) [1] r=0 lpr=97 pi=[65,97)/1 luod=0'0 crt=56'1095 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:45:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 97 pg[10.b( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=6 ec=57/42 lis/c=95/65 les/c/f=96/66/0 sis=97) [1] r=0 lpr=97 pi=[65,97)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:45:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 97 pg[10.1b( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=5 ec=57/42 lis/c=95/64 les/c/f=96/65/0 sis=97) [1] r=0 lpr=97 pi=[64,97)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:45:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 97 pg[10.1c( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=73/73 les/c/f=74/74/0 sis=97) [1]/[2] r=-1 lpr=97 pi=[73,97)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:45:07 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 97 pg[10.1c( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=73/73 les/c/f=74/74/0 sis=97) [1]/[2] r=-1 lpr=97 pi=[73,97)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 07 09:45:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:07 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f80004920 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:07 compute-1 sudo[90889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pafapcrurxbdouthwsnupdhyzeztmwxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100707.605579-334-194088411571186/AnsiballZ_setup.py'
Dec 07 09:45:07 compute-1 sudo[90889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:45:08 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:08 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:45:08 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:08.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:45:08 compute-1 python3.9[90891]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 07 09:45:08 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 07 09:45:08 compute-1 sudo[90889]: pam_unix(sudo:session): session closed for user root
Dec 07 09:45:08 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Dec 07 09:45:08 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Dec 07 09:45:08 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:08 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c003f70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:08 compute-1 sudo[90974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikfppkymqrrvxvkxxogvbegurkzdldyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100707.605579-334-194088411571186/AnsiballZ_dnf.py'
Dec 07 09:45:08 compute-1 sudo[90974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:45:09 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:09 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f88002010 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:09 compute-1 python3.9[90976]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 07 09:45:09 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Dec 07 09:45:09 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:09 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:45:09 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:09.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:45:09 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Dec 07 09:45:09 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:09 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f80004920 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:10 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:10 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:45:10 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:10.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:45:10 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Dec 07 09:45:10 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:10 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f74004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:11 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:11 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c003f70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:11 compute-1 ceph-mon[80077]: 12.1a deep-scrub starts
Dec 07 09:45:11 compute-1 ceph-mon[80077]: 12.1a deep-scrub ok
Dec 07 09:45:11 compute-1 ceph-mon[80077]: 8.10 deep-scrub starts
Dec 07 09:45:11 compute-1 ceph-mon[80077]: 8.10 deep-scrub ok
Dec 07 09:45:11 compute-1 ceph-mon[80077]: osdmap e97: 3 total, 3 up, 3 in
Dec 07 09:45:11 compute-1 ceph-mon[80077]: 5.3 deep-scrub starts
Dec 07 09:45:11 compute-1 ceph-mon[80077]: 5.3 deep-scrub ok
Dec 07 09:45:11 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Dec 07 09:45:11 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Dec 07 09:45:11 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:45:11 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:45:11 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Dec 07 09:45:11 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Dec 07 09:45:11 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Dec 07 09:45:11 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:45:11 compute-1 ceph-mon[80077]: 8.18 scrub starts
Dec 07 09:45:11 compute-1 ceph-mon[80077]: 8.18 scrub ok
Dec 07 09:45:11 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Dec 07 09:45:11 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 9.f scrub starts
Dec 07 09:45:11 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e98 e98: 3 total, 3 up, 3 in
Dec 07 09:45:11 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 9.f scrub ok
Dec 07 09:45:11 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:11 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:45:11 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:11.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:45:11 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 98 pg[10.1b( v 56'1095 (0'0,56'1095] local-lis/les=97/98 n=5 ec=57/42 lis/c=95/64 les/c/f=96/65/0 sis=97) [1] r=0 lpr=97 pi=[64,97)/1 crt=56'1095 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:45:11 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 98 pg[10.b( v 56'1095 (0'0,56'1095] local-lis/les=97/98 n=6 ec=57/42 lis/c=95/65 les/c/f=96/66/0 sis=97) [1] r=0 lpr=97 pi=[65,97)/1 crt=56'1095 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:45:11 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:11 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f64001090 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:11 compute-1 ceph-mon[80077]: pgmap v29: 337 pgs: 2 active+clean+scrubbing, 335 active+clean; 455 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Dec 07 09:45:11 compute-1 ceph-mon[80077]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Dec 07 09:45:11 compute-1 ceph-mon[80077]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Dec 07 09:45:11 compute-1 ceph-mon[80077]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Dec 07 09:45:11 compute-1 ceph-mon[80077]: 4.1 scrub starts
Dec 07 09:45:11 compute-1 ceph-mon[80077]: 4.1 scrub ok
Dec 07 09:45:11 compute-1 ceph-mon[80077]: 12.b scrub starts
Dec 07 09:45:11 compute-1 ceph-mon[80077]: 12.b scrub ok
Dec 07 09:45:11 compute-1 ceph-mon[80077]: 12.1e scrub starts
Dec 07 09:45:11 compute-1 ceph-mon[80077]: 12.1e scrub ok
Dec 07 09:45:11 compute-1 ceph-mon[80077]: 8.1b scrub starts
Dec 07 09:45:11 compute-1 ceph-mon[80077]: 8.1b scrub ok
Dec 07 09:45:11 compute-1 ceph-mon[80077]: 7.2 scrub starts
Dec 07 09:45:11 compute-1 ceph-mon[80077]: pgmap v30: 337 pgs: 2 remapped+peering, 2 peering, 333 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 37 B/s, 1 objects/s recovering
Dec 07 09:45:11 compute-1 ceph-mon[80077]: 11.8 scrub starts
Dec 07 09:45:11 compute-1 ceph-mon[80077]: 11.8 scrub ok
Dec 07 09:45:11 compute-1 ceph-mon[80077]: 9.6 scrub starts
Dec 07 09:45:11 compute-1 ceph-mon[80077]: 7.2 scrub ok
Dec 07 09:45:11 compute-1 ceph-mon[80077]: 12.8 deep-scrub starts
Dec 07 09:45:11 compute-1 ceph-mon[80077]: 9.6 scrub ok
Dec 07 09:45:11 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Dec 07 09:45:11 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Dec 07 09:45:11 compute-1 ceph-mon[80077]: 9.f scrub starts
Dec 07 09:45:11 compute-1 ceph-mon[80077]: osdmap e98: 3 total, 3 up, 3 in
Dec 07 09:45:11 compute-1 ceph-mon[80077]: 9.f scrub ok
Dec 07 09:45:12 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:12 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:45:12 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:12.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:45:12 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 98 pg[10.1d( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=78/78 les/c/f=79/79/0 sis=98) [1] r=0 lpr=98 pi=[78,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:45:12 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 98 pg[10.d( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=78/78 les/c/f=79/79/0 sis=98) [1] r=0 lpr=98 pi=[78,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:45:12 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:12 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f88002010 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:13 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:13 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f74004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:13 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 07 09:45:13 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e99 e99: 3 total, 3 up, 3 in
Dec 07 09:45:13 compute-1 ceph-mon[80077]: 8.b scrub starts
Dec 07 09:45:13 compute-1 ceph-mon[80077]: 8.b scrub ok
Dec 07 09:45:13 compute-1 ceph-mon[80077]: 12.8 deep-scrub ok
Dec 07 09:45:13 compute-1 ceph-mon[80077]: 5.6 deep-scrub starts
Dec 07 09:45:13 compute-1 ceph-mon[80077]: 5.6 deep-scrub ok
Dec 07 09:45:13 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:45:13 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 99 pg[10.1d( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=78/78 les/c/f=79/79/0 sis=99) [1]/[0] r=-1 lpr=99 pi=[78,99)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:45:13 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 99 pg[10.1d( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=78/78 les/c/f=79/79/0 sis=99) [1]/[0] r=-1 lpr=99 pi=[78,99)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 07 09:45:13 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 99 pg[10.d( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=78/78 les/c/f=79/79/0 sis=99) [1]/[0] r=-1 lpr=99 pi=[78,99)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:45:13 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 99 pg[10.d( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=78/78 les/c/f=79/79/0 sis=99) [1]/[0] r=-1 lpr=99 pi=[78,99)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 07 09:45:13 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:13 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:45:13 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:13.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:45:13 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:13 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f74004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:14 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:14 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:45:14 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:14.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:45:14 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:14 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f64001090 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:15 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:15 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f88002010 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:15 compute-1 irqbalance[786]: Cannot change IRQ 26 affinity: Operation not permitted
Dec 07 09:45:15 compute-1 irqbalance[786]: IRQ 26 affinity is now unmanaged
Dec 07 09:45:15 compute-1 systemd[83131]: Starting Mark boot as successful...
Dec 07 09:45:15 compute-1 systemd[83131]: Finished Mark boot as successful.
Dec 07 09:45:15 compute-1 sudo[91020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 09:45:15 compute-1 sudo[91020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:45:15 compute-1 sudo[91020]: pam_unix(sudo:session): session closed for user root
Dec 07 09:45:15 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:15 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:45:15 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:15.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:45:15 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:15 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c003f70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:16 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:16 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:45:16 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:16.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:45:16 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:16 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f74004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:17 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:17 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f64001090 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:17 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:17 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:45:17 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:17.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:45:17 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:17 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f8800a340 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:18 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:18 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:45:18 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:18.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:45:18 compute-1 ceph-mon[80077]: pgmap v32: 337 pgs: 2 remapped+peering, 2 peering, 333 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 0 objects/s recovering
Dec 07 09:45:18 compute-1 ceph-mon[80077]: 12.a scrub starts
Dec 07 09:45:18 compute-1 ceph-mon[80077]: 12.a scrub ok
Dec 07 09:45:18 compute-1 ceph-mon[80077]: osdmap e99: 3 total, 3 up, 3 in
Dec 07 09:45:18 compute-1 ceph-mon[80077]: 12.19 scrub starts
Dec 07 09:45:18 compute-1 ceph-mon[80077]: 12.19 scrub ok
Dec 07 09:45:18 compute-1 ceph-mds[85822]: mds.beacon.cephfs.compute-1.ihigcc missed beacon ack from the monitors
Dec 07 09:45:18 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Dec 07 09:45:18 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Dec 07 09:45:18 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:18 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c003f70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:19 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:19 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f74004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:19 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Dec 07 09:45:19 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:19 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:45:19 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:19.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:45:19 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:19 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f64001090 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:20 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e99 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 07 09:45:20 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:20 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:45:20 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:20.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:45:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:20 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f8800a340 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:20 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Dec 07 09:45:20 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Dec 07 09:45:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:21 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c003f70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:21 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:21 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:45:21 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:21.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:45:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:21 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f74004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:22 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:22 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:45:22 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:22.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:45:22 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Dec 07 09:45:22 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Dec 07 09:45:22 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/094522 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 07 09:45:22 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:22 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f64001fc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:23 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f8800a340 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:23 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e100 e100: 3 total, 3 up, 3 in
Dec 07 09:45:23 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 100 pg[10.1c( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=5 ec=57/42 lis/c=97/73 les/c/f=98/74/0 sis=100) [1] r=0 lpr=100 pi=[73,100)/1 luod=0'0 crt=56'1095 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:45:23 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 100 pg[10.1c( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=5 ec=57/42 lis/c=97/73 les/c/f=98/74/0 sis=100) [1] r=0 lpr=100 pi=[73,100)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:45:23 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 100 pg[10.c( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=5 ec=57/42 lis/c=97/73 les/c/f=98/74/0 sis=100) [1] r=0 lpr=100 pi=[73,100)/1 luod=0'0 crt=56'1095 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:45:23 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 100 pg[10.c( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=5 ec=57/42 lis/c=97/73 les/c/f=98/74/0 sis=100) [1] r=0 lpr=100 pi=[73,100)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:45:23 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Dec 07 09:45:23 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Dec 07 09:45:23 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:23 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:45:23 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:23.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:45:23 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Dec 07 09:45:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:23 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c003f70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:24 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:24 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:45:24 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:24.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:45:24 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:24 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f74004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:25 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 07 09:45:25 compute-1 ceph-mon[80077]: pgmap v34: 337 pgs: 1 active+remapped, 1 active+recovering+remapped, 2 unknown, 333 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 3/205 objects misplaced (1.463%)
Dec 07 09:45:25 compute-1 ceph-mon[80077]: 5.14 deep-scrub starts
Dec 07 09:45:25 compute-1 ceph-mon[80077]: 5.14 deep-scrub ok
Dec 07 09:45:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:25 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f64001fc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:25 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:25 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:45:25 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:25.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:45:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:25 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f8800a340 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:26 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:26 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:45:26 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:26.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:45:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:26 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c003f70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:27 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:27 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f74004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:27 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e101 e101: 3 total, 3 up, 3 in
Dec 07 09:45:27 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 101 pg[10.c( v 56'1095 (0'0,56'1095] local-lis/les=100/101 n=5 ec=57/42 lis/c=97/73 les/c/f=98/74/0 sis=100) [1] r=0 lpr=100 pi=[73,100)/1 crt=56'1095 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:45:27 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 101 pg[10.1c( v 56'1095 (0'0,56'1095] local-lis/les=100/101 n=5 ec=57/42 lis/c=97/73 les/c/f=98/74/0 sis=100) [1] r=0 lpr=100 pi=[73,100)/1 crt=56'1095 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:45:27 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:27 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:45:27 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:27.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:45:27 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:27 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f64001fc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:28 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:28 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:45:28 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:28.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:45:28 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:28 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f8800a340 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:29 compute-1 ceph-mon[80077]: 7.13 scrub starts
Dec 07 09:45:29 compute-1 ceph-mon[80077]: pgmap v35: 337 pgs: 1 active+remapped, 1 active+recovering+remapped, 2 unknown, 333 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 3/205 objects misplaced (1.463%)
Dec 07 09:45:29 compute-1 ceph-mon[80077]: 7.13 scrub ok
Dec 07 09:45:29 compute-1 ceph-mon[80077]: 7.b scrub starts
Dec 07 09:45:29 compute-1 ceph-mon[80077]: pgmap v36: 337 pgs: 1 active+remapped, 1 active+recovering+remapped, 2 unknown, 333 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 3/205 objects misplaced (1.463%)
Dec 07 09:45:29 compute-1 ceph-mon[80077]: 8.c scrub starts
Dec 07 09:45:29 compute-1 ceph-mon[80077]: 8.c scrub ok
Dec 07 09:45:29 compute-1 ceph-mon[80077]: 11.14 scrub starts
Dec 07 09:45:29 compute-1 ceph-mon[80077]: 11.14 scrub ok
Dec 07 09:45:29 compute-1 ceph-mon[80077]: 7.b scrub ok
Dec 07 09:45:29 compute-1 ceph-mon[80077]: 10.15 scrub starts
Dec 07 09:45:29 compute-1 ceph-mon[80077]: 12.13 deep-scrub starts
Dec 07 09:45:29 compute-1 ceph-mon[80077]: 12.13 deep-scrub ok
Dec 07 09:45:29 compute-1 ceph-mon[80077]: 10.10 scrub starts
Dec 07 09:45:29 compute-1 ceph-mon[80077]: pgmap v37: 337 pgs: 2 active+clean+scrubbing, 2 active+remapped, 2 unknown, 331 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 07 09:45:29 compute-1 ceph-mon[80077]: 4.15 scrub starts
Dec 07 09:45:29 compute-1 ceph-mon[80077]: 4.15 scrub ok
Dec 07 09:45:29 compute-1 ceph-mon[80077]: 10.16 scrub starts
Dec 07 09:45:29 compute-1 ceph-mon[80077]: 10.10 scrub ok
Dec 07 09:45:29 compute-1 ceph-mon[80077]: 10.18 scrub starts
Dec 07 09:45:29 compute-1 ceph-mon[80077]: 10.15 scrub ok
Dec 07 09:45:29 compute-1 ceph-mon[80077]: 7.1f scrub starts
Dec 07 09:45:29 compute-1 ceph-mon[80077]: 7.1f scrub ok
Dec 07 09:45:29 compute-1 ceph-mon[80077]: pgmap v38: 337 pgs: 2 active+clean+scrubbing, 2 active+remapped, 2 unknown, 331 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 07 09:45:29 compute-1 ceph-mon[80077]: 12.3 deep-scrub starts
Dec 07 09:45:29 compute-1 ceph-mon[80077]: 12.3 deep-scrub ok
Dec 07 09:45:29 compute-1 ceph-mon[80077]: 10.17 scrub starts
Dec 07 09:45:29 compute-1 ceph-mon[80077]: 10.16 scrub ok
Dec 07 09:45:29 compute-1 ceph-mon[80077]: 6.9 scrub starts
Dec 07 09:45:29 compute-1 ceph-mon[80077]: 10.18 scrub ok
Dec 07 09:45:29 compute-1 ceph-mon[80077]: osdmap e100: 3 total, 3 up, 3 in
Dec 07 09:45:29 compute-1 ceph-mon[80077]: 12.1d scrub starts
Dec 07 09:45:29 compute-1 ceph-mon[80077]: 6.9 scrub ok
Dec 07 09:45:29 compute-1 ceph-mon[80077]: 12.1d scrub ok
Dec 07 09:45:29 compute-1 ceph-mon[80077]: 10.7 scrub starts
Dec 07 09:45:29 compute-1 ceph-mon[80077]: 10.17 scrub ok
Dec 07 09:45:29 compute-1 ceph-mon[80077]: 10.7 scrub ok
Dec 07 09:45:29 compute-1 ceph-mon[80077]: pgmap v40: 337 pgs: 4 active+clean+scrubbing, 2 peering, 2 unknown, 329 active+clean; 454 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 07 09:45:29 compute-1 ceph-mon[80077]: 12.4 scrub starts
Dec 07 09:45:29 compute-1 ceph-mon[80077]: 12.4 scrub ok
Dec 07 09:45:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:29 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c003f70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:29 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:29 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:45:29 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:29.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:45:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:29 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f74004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:30 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 07 09:45:30 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:30 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:45:30 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:30.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:45:30 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:30 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f64001fc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:31 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f8800a340 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:31 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e102 e102: 3 total, 3 up, 3 in
Dec 07 09:45:31 compute-1 ceph-mon[80077]: 12.2 scrub starts
Dec 07 09:45:31 compute-1 ceph-mon[80077]: 12.2 scrub ok
Dec 07 09:45:31 compute-1 ceph-mon[80077]: pgmap v41: 337 pgs: 1 active+recovery_wait+remapped, 1 active+recovering+remapped, 2 active+clean+scrubbing, 2 peering, 331 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 13/209 objects misplaced (6.220%)
Dec 07 09:45:31 compute-1 ceph-mon[80077]: 9.18 scrub starts
Dec 07 09:45:31 compute-1 ceph-mon[80077]: 9.18 scrub ok
Dec 07 09:45:31 compute-1 ceph-mon[80077]: osdmap e101: 3 total, 3 up, 3 in
Dec 07 09:45:31 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:45:31 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 102 pg[10.d( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=6 ec=57/42 lis/c=99/78 les/c/f=100/79/0 sis=102) [1] r=0 lpr=102 pi=[78,102)/1 luod=0'0 crt=56'1095 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:45:31 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 102 pg[10.d( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=6 ec=57/42 lis/c=99/78 les/c/f=100/79/0 sis=102) [1] r=0 lpr=102 pi=[78,102)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:45:31 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:31 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:45:31 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:31.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:45:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:31 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c003f70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:32 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:32 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:45:32 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:32.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:45:32 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e103 e103: 3 total, 3 up, 3 in
Dec 07 09:45:32 compute-1 ceph-mon[80077]: 8.16 scrub starts
Dec 07 09:45:32 compute-1 ceph-mon[80077]: 8.16 scrub ok
Dec 07 09:45:32 compute-1 ceph-mon[80077]: pgmap v43: 337 pgs: 1 active+recovery_wait+remapped, 1 active+recovering+remapped, 2 active+clean+scrubbing, 2 peering, 331 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 13/209 objects misplaced (6.220%)
Dec 07 09:45:32 compute-1 ceph-mon[80077]: 9.13 scrub starts
Dec 07 09:45:32 compute-1 ceph-mon[80077]: 9.13 scrub ok
Dec 07 09:45:32 compute-1 ceph-mon[80077]: 11.e deep-scrub starts
Dec 07 09:45:32 compute-1 ceph-mon[80077]: 11.e deep-scrub ok
Dec 07 09:45:32 compute-1 ceph-mon[80077]: pgmap v44: 337 pgs: 1 active+recovering+remapped, 1 active+remapped, 335 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 3/208 objects misplaced (1.442%)
Dec 07 09:45:32 compute-1 ceph-mon[80077]: 8.d scrub starts
Dec 07 09:45:32 compute-1 ceph-mon[80077]: 8.d scrub ok
Dec 07 09:45:32 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Dec 07 09:45:32 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Dec 07 09:45:32 compute-1 ceph-mon[80077]: osdmap e102: 3 total, 3 up, 3 in
Dec 07 09:45:32 compute-1 ceph-mon[80077]: 12.9 scrub starts
Dec 07 09:45:32 compute-1 ceph-mon[80077]: 12.9 scrub ok
Dec 07 09:45:32 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Dec 07 09:45:32 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Dec 07 09:45:32 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 103 pg[10.1d( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=5 ec=57/42 lis/c=99/78 les/c/f=100/79/0 sis=103) [1] r=0 lpr=103 pi=[78,103)/1 luod=0'0 crt=56'1095 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:45:32 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 103 pg[10.1d( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=5 ec=57/42 lis/c=99/78 les/c/f=100/79/0 sis=103) [1] r=0 lpr=103 pi=[78,103)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:45:32 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 103 pg[6.e( empty local-lis/les=0/0 n=0 ec=53/21 lis/c=77/77 les/c/f=78/78/0 sis=103) [1] r=0 lpr=103 pi=[77,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:45:32 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 103 pg[10.d( v 56'1095 (0'0,56'1095] local-lis/les=102/103 n=6 ec=57/42 lis/c=99/78 les/c/f=100/79/0 sis=102) [1] r=0 lpr=102 pi=[78,102)/1 crt=56'1095 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:45:32 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:32 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f74004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:33 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f58002690 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:33 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Dec 07 09:45:33 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Dec 07 09:45:33 compute-1 ceph-mon[80077]: pgmap v46: 337 pgs: 1 active+recovering+remapped, 1 active+remapped, 335 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 3/208 objects misplaced (1.442%); 0 B/s, 2 objects/s recovering
Dec 07 09:45:33 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Dec 07 09:45:33 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Dec 07 09:45:33 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Dec 07 09:45:33 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Dec 07 09:45:33 compute-1 ceph-mon[80077]: osdmap e103: 3 total, 3 up, 3 in
Dec 07 09:45:33 compute-1 ceph-mon[80077]: 12.7 scrub starts
Dec 07 09:45:33 compute-1 ceph-mon[80077]: 12.7 scrub ok
Dec 07 09:45:33 compute-1 ceph-mon[80077]: 6.c scrub starts
Dec 07 09:45:33 compute-1 ceph-mon[80077]: 6.c scrub ok
Dec 07 09:45:33 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:45:33 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:45:33 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:45:33 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 07 09:45:33 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:45:33 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:45:33 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 07 09:45:33 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 07 09:45:33 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:45:33 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e104 e104: 3 total, 3 up, 3 in
Dec 07 09:45:33 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 104 pg[6.e( v 46'39 lc 45'19 (0'0,46'39] local-lis/les=103/104 n=1 ec=53/21 lis/c=77/77 les/c/f=78/78/0 sis=103) [1] r=0 lpr=103 pi=[77,103)/1 crt=46'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:45:33 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 104 pg[10.1d( v 56'1095 (0'0,56'1095] local-lis/les=103/104 n=5 ec=57/42 lis/c=99/78 les/c/f=100/79/0 sis=103) [1] r=0 lpr=103 pi=[78,103)/1 crt=56'1095 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:45:33 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:33 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:45:33 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:33.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:45:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:33 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f8800a340 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:34 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:34 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:45:34 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:34.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:45:34 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 10.e scrub starts
Dec 07 09:45:34 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 10.e scrub ok
Dec 07 09:45:34 compute-1 ceph-mon[80077]: 10.1b scrub starts
Dec 07 09:45:34 compute-1 ceph-mon[80077]: 10.1b scrub ok
Dec 07 09:45:34 compute-1 ceph-mon[80077]: osdmap e104: 3 total, 3 up, 3 in
Dec 07 09:45:34 compute-1 ceph-mon[80077]: 11.3 scrub starts
Dec 07 09:45:34 compute-1 ceph-mon[80077]: 11.3 scrub ok
Dec 07 09:45:34 compute-1 ceph-mon[80077]: 10.5 scrub starts
Dec 07 09:45:34 compute-1 ceph-mon[80077]: 10.5 scrub ok
Dec 07 09:45:34 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:34 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c003f70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:35 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 07 09:45:35 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:35 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f74004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:35 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 6.f scrub starts
Dec 07 09:45:35 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 6.f scrub ok
Dec 07 09:45:35 compute-1 ceph-mon[80077]: pgmap v49: 337 pgs: 2 peering, 335 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 07 09:45:35 compute-1 ceph-mon[80077]: 10.e scrub starts
Dec 07 09:45:35 compute-1 ceph-mon[80077]: 10.e scrub ok
Dec 07 09:45:35 compute-1 ceph-mon[80077]: 9.3 scrub starts
Dec 07 09:45:35 compute-1 ceph-mon[80077]: 9.3 scrub ok
Dec 07 09:45:35 compute-1 ceph-mon[80077]: 6.b deep-scrub starts
Dec 07 09:45:35 compute-1 ceph-mon[80077]: 6.b deep-scrub ok
Dec 07 09:45:35 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:35 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:45:35 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:35.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:45:35 compute-1 sudo[91087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 09:45:35 compute-1 sudo[91087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:45:35 compute-1 sudo[91087]: pam_unix(sudo:session): session closed for user root
Dec 07 09:45:35 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:35 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f58002690 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:36 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:36 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:45:36 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:36.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:45:36 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 10.f scrub starts
Dec 07 09:45:36 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 10.f scrub ok
Dec 07 09:45:36 compute-1 ceph-mon[80077]: 6.f scrub starts
Dec 07 09:45:36 compute-1 ceph-mon[80077]: 6.f scrub ok
Dec 07 09:45:36 compute-1 ceph-mon[80077]: 10.11 scrub starts
Dec 07 09:45:36 compute-1 ceph-mon[80077]: 10.11 scrub ok
Dec 07 09:45:36 compute-1 ceph-mon[80077]: 10.8 scrub starts
Dec 07 09:45:36 compute-1 ceph-mon[80077]: 10.8 scrub ok
Dec 07 09:45:36 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:36 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f8800a340 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:37 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:37 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f8800a340 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:37 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 10.0 scrub starts
Dec 07 09:45:37 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 10.0 scrub ok
Dec 07 09:45:37 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:37 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:45:37 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:37.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:45:37 compute-1 ceph-mon[80077]: pgmap v50: 337 pgs: 2 peering, 335 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 07 09:45:37 compute-1 ceph-mon[80077]: 10.f scrub starts
Dec 07 09:45:37 compute-1 ceph-mon[80077]: 10.f scrub ok
Dec 07 09:45:37 compute-1 ceph-mon[80077]: 10.13 scrub starts
Dec 07 09:45:37 compute-1 ceph-mon[80077]: 10.13 scrub ok
Dec 07 09:45:37 compute-1 ceph-mon[80077]: 6.0 scrub starts
Dec 07 09:45:37 compute-1 ceph-mon[80077]: 6.0 scrub ok
Dec 07 09:45:37 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:37 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c003f70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:38 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:38 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:45:38 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:38.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:45:38 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 10.a scrub starts
Dec 07 09:45:38 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 10.a scrub ok
Dec 07 09:45:38 compute-1 sudo[91121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 07 09:45:38 compute-1 sudo[91121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:45:38 compute-1 sudo[91121]: pam_unix(sudo:session): session closed for user root
Dec 07 09:45:38 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:38 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f74004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:38 compute-1 ceph-mon[80077]: 10.0 scrub starts
Dec 07 09:45:38 compute-1 ceph-mon[80077]: 10.0 scrub ok
Dec 07 09:45:38 compute-1 ceph-mon[80077]: 6.1 scrub starts
Dec 07 09:45:38 compute-1 ceph-mon[80077]: 6.1 scrub ok
Dec 07 09:45:38 compute-1 ceph-mon[80077]: 10.2 scrub starts
Dec 07 09:45:38 compute-1 ceph-mon[80077]: 10.2 scrub ok
Dec 07 09:45:38 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:45:38 compute-1 ceph-mon[80077]: 10.a scrub starts
Dec 07 09:45:38 compute-1 ceph-mon[80077]: 10.a scrub ok
Dec 07 09:45:38 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:45:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:39 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f58002690 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:39 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 10.b scrub starts
Dec 07 09:45:39 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 10.b scrub ok
Dec 07 09:45:39 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:39 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:45:39 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:39.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:45:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:39 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f8800a340 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:40 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 07 09:45:40 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:40 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:45:40 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:40.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:45:40 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 6.8 deep-scrub starts
Dec 07 09:45:40 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 6.8 deep-scrub ok
Dec 07 09:45:40 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:40 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c003f70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:41 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f74004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:41 compute-1 ceph-mon[80077]: pgmap v51: 337 pgs: 2 peering, 335 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 07 09:45:41 compute-1 ceph-mon[80077]: 10.3 scrub starts
Dec 07 09:45:41 compute-1 ceph-mon[80077]: 10.3 scrub ok
Dec 07 09:45:41 compute-1 ceph-mon[80077]: 6.4 scrub starts
Dec 07 09:45:41 compute-1 ceph-mon[80077]: 6.4 scrub ok
Dec 07 09:45:41 compute-1 ceph-mon[80077]: 10.b scrub starts
Dec 07 09:45:41 compute-1 ceph-mon[80077]: 10.b scrub ok
Dec 07 09:45:41 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Dec 07 09:45:41 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:41 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:45:41 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:41.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:45:41 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Dec 07 09:45:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:41 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 07 09:45:41 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e105 e105: 3 total, 3 up, 3 in
Dec 07 09:45:41 compute-1 ceph-mon[80077]: 10.1 scrub starts
Dec 07 09:45:41 compute-1 ceph-mon[80077]: 10.1 scrub ok
Dec 07 09:45:41 compute-1 ceph-mon[80077]: pgmap v52: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 07 09:45:41 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Dec 07 09:45:41 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Dec 07 09:45:41 compute-1 ceph-mon[80077]: 6.8 deep-scrub starts
Dec 07 09:45:41 compute-1 ceph-mon[80077]: 6.8 deep-scrub ok
Dec 07 09:45:41 compute-1 ceph-mon[80077]: 10.6 scrub starts
Dec 07 09:45:41 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Dec 07 09:45:41 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Dec 07 09:45:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:41 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f58002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:42 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:42 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 07 09:45:42 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:42 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 07 09:45:42 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:42 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 07 09:45:42 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:42 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:45:42 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:42.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:45:42 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Dec 07 09:45:42 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Dec 07 09:45:42 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 105 pg[10.f( v 56'1095 (0'0,56'1095] local-lis/les=82/83 n=7 ec=57/42 lis/c=82/82 les/c/f=83/83/0 sis=105 pruub=9.629149437s) [2] r=-1 lpr=105 pi=[82,105)/1 crt=56'1095 mlcod 0'0 active pruub 308.168182373s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:45:42 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 105 pg[6.f( v 46'39 (0'0,46'39] local-lis/les=68/69 n=3 ec=53/21 lis/c=68/68 les/c/f=69/69/0 sis=105 pruub=14.347811699s) [0] r=-1 lpr=105 pi=[68,105)/1 crt=46'39 mlcod 46'39 active pruub 312.886810303s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:45:42 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 105 pg[10.f( v 56'1095 (0'0,56'1095] local-lis/les=82/83 n=7 ec=57/42 lis/c=82/82 les/c/f=83/83/0 sis=105 pruub=9.629088402s) [2] r=-1 lpr=105 pi=[82,105)/1 crt=56'1095 mlcod 0'0 unknown NOTIFY pruub 308.168182373s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:45:42 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 105 pg[6.f( v 46'39 (0'0,46'39] local-lis/les=68/69 n=3 ec=53/21 lis/c=68/68 les/c/f=69/69/0 sis=105 pruub=14.347734451s) [0] r=-1 lpr=105 pi=[68,105)/1 crt=46'39 mlcod 0'0 unknown NOTIFY pruub 312.886810303s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:45:42 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 105 pg[10.1f( v 56'1095 (0'0,56'1095] local-lis/les=82/83 n=5 ec=57/42 lis/c=82/82 les/c/f=83/83/0 sis=105 pruub=9.627795219s) [2] r=-1 lpr=105 pi=[82,105)/1 crt=56'1095 mlcod 0'0 active pruub 308.168151855s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:45:42 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 105 pg[10.1f( v 56'1095 (0'0,56'1095] local-lis/les=82/83 n=5 ec=57/42 lis/c=82/82 les/c/f=83/83/0 sis=105 pruub=9.627765656s) [2] r=-1 lpr=105 pi=[82,105)/1 crt=56'1095 mlcod 0'0 unknown NOTIFY pruub 308.168151855s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:45:42 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:42 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f8800a340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:42 compute-1 ceph-mon[80077]: 10.6 scrub ok
Dec 07 09:45:42 compute-1 ceph-mon[80077]: osdmap e105: 3 total, 3 up, 3 in
Dec 07 09:45:42 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Dec 07 09:45:42 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:45:42 compute-1 ceph-mon[80077]: 10.1a scrub starts
Dec 07 09:45:42 compute-1 ceph-mon[80077]: 10.1a scrub ok
Dec 07 09:45:43 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e106 e106: 3 total, 3 up, 3 in
Dec 07 09:45:43 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 106 pg[10.10( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=2 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=106 pruub=8.331460953s) [2] r=-1 lpr=106 pi=[57,106)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active pruub 307.314849854s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:45:43 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 106 pg[10.10( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=2 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=106 pruub=8.331407547s) [2] r=-1 lpr=106 pi=[57,106)/1 crt=56'1095 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 307.314849854s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:45:43 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 106 pg[10.f( v 56'1095 (0'0,56'1095] local-lis/les=82/83 n=7 ec=57/42 lis/c=82/82 les/c/f=83/83/0 sis=106) [2]/[1] r=0 lpr=106 pi=[82,106)/1 crt=56'1095 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:45:43 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 106 pg[10.f( v 56'1095 (0'0,56'1095] local-lis/les=82/83 n=7 ec=57/42 lis/c=82/82 les/c/f=83/83/0 sis=106) [2]/[1] r=0 lpr=106 pi=[82,106)/1 crt=56'1095 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 07 09:45:43 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 106 pg[10.1f( v 56'1095 (0'0,56'1095] local-lis/les=82/83 n=5 ec=57/42 lis/c=82/82 les/c/f=83/83/0 sis=106) [2]/[1] r=0 lpr=106 pi=[82,106)/1 crt=56'1095 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:45:43 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 106 pg[10.1f( v 56'1095 (0'0,56'1095] local-lis/les=82/83 n=5 ec=57/42 lis/c=82/82 les/c/f=83/83/0 sis=106) [2]/[1] r=0 lpr=106 pi=[82,106)/1 crt=56'1095 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 07 09:45:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:43 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:43 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 10.d scrub starts
Dec 07 09:45:43 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 10.d scrub ok
Dec 07 09:45:43 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:43 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:45:43 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:43.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:45:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:43 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f74004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:44 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:44 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:45:44 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:44.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:45:44 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Dec 07 09:45:44 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Dec 07 09:45:44 compute-1 ceph-mon[80077]: pgmap v54: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 07 09:45:44 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Dec 07 09:45:44 compute-1 ceph-mon[80077]: osdmap e106: 3 total, 3 up, 3 in
Dec 07 09:45:44 compute-1 ceph-mon[80077]: 10.d scrub starts
Dec 07 09:45:44 compute-1 ceph-mon[80077]: 10.d scrub ok
Dec 07 09:45:44 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:44 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f58002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:44 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e107 e107: 3 total, 3 up, 3 in
Dec 07 09:45:44 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 107 pg[10.10( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=2 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=107) [2]/[1] r=0 lpr=107 pi=[57,107)/1 crt=56'1095 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:45:44 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 107 pg[10.10( v 56'1095 (0'0,56'1095] local-lis/les=57/58 n=2 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=107) [2]/[1] r=0 lpr=107 pi=[57,107)/1 crt=56'1095 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 07 09:45:44 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 107 pg[10.1f( v 56'1095 (0'0,56'1095] local-lis/les=106/107 n=5 ec=57/42 lis/c=82/82 les/c/f=83/83/0 sis=106) [2]/[1] async=[2] r=0 lpr=106 pi=[82,106)/1 crt=56'1095 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:45:44 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 107 pg[10.f( v 56'1095 (0'0,56'1095] local-lis/les=106/107 n=7 ec=57/42 lis/c=82/82 les/c/f=83/83/0 sis=106) [2]/[1] async=[2] r=0 lpr=106 pi=[82,106)/1 crt=56'1095 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:45:45 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 07 09:45:45 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:45 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 07 09:45:45 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:45 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f8800a340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:45 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:45 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:45:45 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:45.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:45:45 compute-1 ceph-mon[80077]: pgmap v56: 337 pgs: 1 unknown, 2 remapped+peering, 334 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 07 09:45:45 compute-1 ceph-mon[80077]: 10.1c scrub starts
Dec 07 09:45:45 compute-1 ceph-mon[80077]: 10.1c scrub ok
Dec 07 09:45:45 compute-1 ceph-mon[80077]: osdmap e107: 3 total, 3 up, 3 in
Dec 07 09:45:45 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:45 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:46 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e108 e108: 3 total, 3 up, 3 in
Dec 07 09:45:46 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 108 pg[10.f( v 56'1095 (0'0,56'1095] local-lis/les=106/107 n=7 ec=57/42 lis/c=106/82 les/c/f=107/83/0 sis=108 pruub=14.925094604s) [2] async=[2] r=-1 lpr=108 pi=[82,108)/1 crt=56'1095 mlcod 56'1095 active pruub 316.890106201s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:45:46 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 108 pg[10.f( v 56'1095 (0'0,56'1095] local-lis/les=106/107 n=7 ec=57/42 lis/c=106/82 les/c/f=107/83/0 sis=108 pruub=14.924998283s) [2] r=-1 lpr=108 pi=[82,108)/1 crt=56'1095 mlcod 0'0 unknown NOTIFY pruub 316.890106201s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:45:46 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 108 pg[10.1f( v 56'1095 (0'0,56'1095] local-lis/les=106/107 n=5 ec=57/42 lis/c=106/82 les/c/f=107/83/0 sis=108 pruub=14.923913956s) [2] async=[2] r=-1 lpr=108 pi=[82,108)/1 crt=56'1095 mlcod 56'1095 active pruub 316.890075684s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:45:46 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 108 pg[10.1f( v 56'1095 (0'0,56'1095] local-lis/les=106/107 n=5 ec=57/42 lis/c=106/82 les/c/f=107/83/0 sis=108 pruub=14.923781395s) [2] r=-1 lpr=108 pi=[82,108)/1 crt=56'1095 mlcod 0'0 unknown NOTIFY pruub 316.890075684s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:45:46 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 108 pg[10.10( v 56'1095 (0'0,56'1095] local-lis/les=107/108 n=2 ec=57/42 lis/c=57/57 les/c/f=58/58/0 sis=107) [2]/[1] async=[2] r=0 lpr=107 pi=[57,107)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:45:46 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:46 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:45:46 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:46.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:45:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:46 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f74004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:47 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e109 e109: 3 total, 3 up, 3 in
Dec 07 09:45:47 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 109 pg[10.10( v 56'1095 (0'0,56'1095] local-lis/les=107/108 n=2 ec=57/42 lis/c=107/57 les/c/f=108/58/0 sis=109 pruub=14.931024551s) [2] async=[2] r=-1 lpr=109 pi=[57,109)/1 crt=56'1095 lcod 0'0 mlcod 0'0 active pruub 318.084197998s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:45:47 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 109 pg[10.10( v 56'1095 (0'0,56'1095] local-lis/les=107/108 n=2 ec=57/42 lis/c=107/57 les/c/f=108/58/0 sis=109 pruub=14.930963516s) [2] r=-1 lpr=109 pi=[57,109)/1 crt=56'1095 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 318.084197998s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:45:47 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:47 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f58002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:47 compute-1 ceph-mon[80077]: osdmap e108: 3 total, 3 up, 3 in
Dec 07 09:45:47 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:47 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:45:47 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:47.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:45:47 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:47 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f8800a340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:48 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:48 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:45:48 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:48.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:45:48 compute-1 ceph-mon[80077]: pgmap v59: 337 pgs: 1 unknown, 2 remapped+peering, 334 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 07 09:45:48 compute-1 ceph-mon[80077]: osdmap e109: 3 total, 3 up, 3 in
Dec 07 09:45:48 compute-1 ceph-mon[80077]: 10.1f scrub starts
Dec 07 09:45:48 compute-1 ceph-mon[80077]: 10.1f scrub ok
Dec 07 09:45:48 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e110 e110: 3 total, 3 up, 3 in
Dec 07 09:45:48 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/094548 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 07 09:45:48 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:48 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:49 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:49 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f64001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:49 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:49 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:45:49 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:49.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:45:49 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:49 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f58002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:50 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:50 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:45:50 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:50.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:45:50 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 07 09:45:50 compute-1 ceph-mon[80077]: pgmap v61: 337 pgs: 1 unknown, 2 remapped+peering, 334 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 07 09:45:50 compute-1 ceph-mon[80077]: osdmap e110: 3 total, 3 up, 3 in
Dec 07 09:45:50 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:50 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f8800a340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:51 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:51 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:51 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Dec 07 09:45:51 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Dec 07 09:45:51 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:51 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:45:51 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:51.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:45:51 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:51 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f64001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:52 compute-1 ceph-mon[80077]: pgmap v63: 337 pgs: 1 peering, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 07 09:45:52 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:52 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 07 09:45:52 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:52.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 07 09:45:52 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 10.1e deep-scrub starts
Dec 07 09:45:52 compute-1 ceph-osd[77581]: log_channel(cluster) log [DBG] : 10.1e deep-scrub ok
Dec 07 09:45:52 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:52 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f58002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:53 compute-1 ceph-mon[80077]: 10.1d scrub starts
Dec 07 09:45:53 compute-1 ceph-mon[80077]: 10.1d scrub ok
Dec 07 09:45:53 compute-1 ceph-mon[80077]: 10.1e deep-scrub starts
Dec 07 09:45:53 compute-1 ceph-mon[80077]: 10.1e deep-scrub ok
Dec 07 09:45:53 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:53 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f8800a340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:53 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:53 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:45:53 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:53.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:45:53 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:53 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:54 compute-1 ceph-mon[80077]: pgmap v64: 337 pgs: 1 peering, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 07 09:45:54 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:54 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 07 09:45:54 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:54.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 07 09:45:54 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:54 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f64001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:55 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e111 e111: 3 total, 3 up, 3 in
Dec 07 09:45:55 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Dec 07 09:45:55 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:55 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f58002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:55 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 07 09:45:55 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:55 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:45:55 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:55.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:45:55 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:55 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f8800a340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:55 compute-1 sudo[91225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 09:45:55 compute-1 sudo[91225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:45:55 compute-1 sudo[91225]: pam_unix(sudo:session): session closed for user root
Dec 07 09:45:56 compute-1 ceph-mon[80077]: pgmap v65: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 07 09:45:56 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Dec 07 09:45:56 compute-1 ceph-mon[80077]: osdmap e111: 3 total, 3 up, 3 in
Dec 07 09:45:56 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:56 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 07 09:45:56 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:56.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 07 09:45:56 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:56 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:57 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f64001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:57 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:57 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 07 09:45:57 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:57.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 07 09:45:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:57 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f58002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:58 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Dec 07 09:45:58 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e112 e112: 3 total, 3 up, 3 in
Dec 07 09:45:58 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:58 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 07 09:45:58 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:45:58.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 07 09:45:58 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:58 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f8800a340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:59 compute-1 ceph-mon[80077]: pgmap v67: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 07 09:45:59 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:45:59 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Dec 07 09:45:59 compute-1 ceph-mon[80077]: osdmap e112: 3 total, 3 up, 3 in
Dec 07 09:45:59 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e113 e113: 3 total, 3 up, 3 in
Dec 07 09:45:59 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:59 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:45:59 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:45:59 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 07 09:45:59 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:45:59.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 07 09:45:59 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:45:59 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f64001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:00 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:00 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 07 09:46:00 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:00.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 07 09:46:00 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 07 09:46:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:00 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f58002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:00 compute-1 ceph-mon[80077]: pgmap v69: 337 pgs: 1 unknown, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 07 09:46:00 compute-1 ceph-mon[80077]: osdmap e113: 3 total, 3 up, 3 in
Dec 07 09:46:00 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e114 e114: 3 total, 3 up, 3 in
Dec 07 09:46:01 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:01 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f8800a340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:01 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:01 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:46:01 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:01.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:46:01 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:01 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:02 compute-1 ceph-mon[80077]: pgmap v71: 337 pgs: 1 unknown, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 07 09:46:02 compute-1 ceph-mon[80077]: osdmap e114: 3 total, 3 up, 3 in
Dec 07 09:46:02 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:02 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 07 09:46:02 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:02.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 07 09:46:02 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e115 e115: 3 total, 3 up, 3 in
Dec 07 09:46:02 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:02 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f64001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:03 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:03 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f58002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:03 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e116 e116: 3 total, 3 up, 3 in
Dec 07 09:46:03 compute-1 ceph-mon[80077]: pgmap v73: 337 pgs: 1 unknown, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 07 09:46:03 compute-1 ceph-mon[80077]: osdmap e115: 3 total, 3 up, 3 in
Dec 07 09:46:03 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:03 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:46:03 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:03.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:46:03 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:03 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f8800a340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:04 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:04 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:46:04 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:04.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:46:04 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:04 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:04 compute-1 ceph-mon[80077]: osdmap e116: 3 total, 3 up, 3 in
Dec 07 09:46:05 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:05 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f64001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:05 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 07 09:46:05 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:05 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:46:05 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:05.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:46:05 compute-1 ceph-mon[80077]: pgmap v76: 337 pgs: 1 peering, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 584 B/s rd, 0 op/s; 20 B/s, 0 objects/s recovering
Dec 07 09:46:05 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:05 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f58002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:06 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:06 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:46:06 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:06.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:46:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:06 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f8800a340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:07 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:07 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:07 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:46:07 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:07.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:46:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:07 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f64001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:08 compute-1 ceph-mon[80077]: pgmap v77: 337 pgs: 1 peering, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Dec 07 09:46:08 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:08 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:46:08 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:08.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:46:08 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:08 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f58002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:09 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e117 e117: 3 total, 3 up, 3 in
Dec 07 09:46:09 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Dec 07 09:46:09 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:09 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f58002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:09 compute-1 sudo[90974]: pam_unix(sudo:session): session closed for user root
Dec 07 09:46:09 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:09 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:46:09 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:09.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:46:09 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:09 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:10 compute-1 ceph-mon[80077]: pgmap v78: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 29 B/s, 1 objects/s recovering
Dec 07 09:46:10 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Dec 07 09:46:10 compute-1 ceph-mon[80077]: osdmap e117: 3 total, 3 up, 3 in
Dec 07 09:46:10 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e118 e118: 3 total, 3 up, 3 in
Dec 07 09:46:10 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:10 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:46:10 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:10.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:46:10 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 07 09:46:10 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:10 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f64001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:11 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e119 e119: 3 total, 3 up, 3 in
Dec 07 09:46:11 compute-1 ceph-mon[80077]: osdmap e118: 3 total, 3 up, 3 in
Dec 07 09:46:11 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:11 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f50000d00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:11 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:11 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:46:11 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:11.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:46:11 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:11 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f58002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:12 compute-1 ceph-mon[80077]: pgmap v81: 337 pgs: 1 unknown, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 466 B/s rd, 0 op/s; 16 B/s, 0 objects/s recovering
Dec 07 09:46:12 compute-1 ceph-mon[80077]: osdmap e119: 3 total, 3 up, 3 in
Dec 07 09:46:12 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e120 e120: 3 total, 3 up, 3 in
Dec 07 09:46:12 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:12 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:46:12 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:12.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:46:12 compute-1 sudo[91409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygbxunqevlfckepjqglbmkexwtzuctcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100772.3613813-370-212412703015154/AnsiballZ_command.py'
Dec 07 09:46:12 compute-1 sudo[91409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:46:12 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:12 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c004020 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:12 compute-1 python3.9[91411]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:46:13 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e121 e121: 3 total, 3 up, 3 in
Dec 07 09:46:13 compute-1 ceph-mon[80077]: osdmap e120: 3 total, 3 up, 3 in
Dec 07 09:46:13 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:46:13 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:13 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f64001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:13 compute-1 sudo[91409]: pam_unix(sudo:session): session closed for user root
Dec 07 09:46:13 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:13 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 07 09:46:13 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:13.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 07 09:46:13 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:13 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f50001820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:14 compute-1 ceph-mon[80077]: pgmap v84: 337 pgs: 1 unknown, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 07 09:46:14 compute-1 ceph-mon[80077]: osdmap e121: 3 total, 3 up, 3 in
Dec 07 09:46:14 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:14 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 07 09:46:14 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:14.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 07 09:46:14 compute-1 sudo[91697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmelfxnuhxsjtygdkdbakrdevlgdncxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100774.0615413-394-245130170215434/AnsiballZ_selinux.py'
Dec 07 09:46:14 compute-1 sudo[91697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:46:14 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:14 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f58002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:14 compute-1 python3.9[91699]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec 07 09:46:14 compute-1 sudo[91697]: pam_unix(sudo:session): session closed for user root
Dec 07 09:46:15 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:15 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c004040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:15 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:46:15 compute-1 sudo[91849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yynbqwukdvmkoafvqjagjkkyypquoilx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100775.4471858-427-107562026868481/AnsiballZ_command.py'
Dec 07 09:46:15 compute-1 sudo[91849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:46:15 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:15 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:46:15 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:15.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:46:15 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:15 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f64001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:15 compute-1 python3.9[91851]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec 07 09:46:16 compute-1 sudo[91849]: pam_unix(sudo:session): session closed for user root
Dec 07 09:46:16 compute-1 sudo[91852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 09:46:16 compute-1 sudo[91852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:46:16 compute-1 sudo[91852]: pam_unix(sudo:session): session closed for user root
Dec 07 09:46:16 compute-1 ceph-mon[80077]: pgmap v86: 337 pgs: 1 unknown, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 07 09:46:16 compute-1 ceph-mon[80077]: mgrmap e34: compute-0.dotugk(active, since 92s), standbys: compute-2.ntknug, compute-1.buauyv
Dec 07 09:46:16 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:16 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 07 09:46:16 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:16.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 07 09:46:16 compute-1 sudo[92027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlizjevhsjtyjafqfctcushvwluwkgwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100776.2681506-451-218386551126197/AnsiballZ_file.py'
Dec 07 09:46:16 compute-1 sudo[92027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:46:16 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:16 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f50001820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:16 compute-1 python3.9[92029]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:46:16 compute-1 sudo[92027]: pam_unix(sudo:session): session closed for user root
Dec 07 09:46:17 compute-1 ceph-mon[80077]: pgmap v87: 337 pgs: 337 active+clean; 458 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 511 B/s wr, 0 op/s; 18 B/s, 0 objects/s recovering
Dec 07 09:46:17 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Dec 07 09:46:17 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e122 e122: 3 total, 3 up, 3 in
Dec 07 09:46:17 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:17 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f58002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:17 compute-1 sudo[92179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjulwfdqcwlwgrzycanwkdsyjbebjpif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100777.312173-475-51547680961485/AnsiballZ_mount.py'
Dec 07 09:46:17 compute-1 sudo[92179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:46:17 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:17 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:46:17 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:17.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:46:17 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:17 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c004060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:17 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e123 e123: 3 total, 3 up, 3 in
Dec 07 09:46:17 compute-1 python3.9[92181]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec 07 09:46:17 compute-1 sudo[92179]: pam_unix(sudo:session): session closed for user root
Dec 07 09:46:18 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Dec 07 09:46:18 compute-1 ceph-mon[80077]: osdmap e122: 3 total, 3 up, 3 in
Dec 07 09:46:18 compute-1 ceph-mon[80077]: osdmap e123: 3 total, 3 up, 3 in
Dec 07 09:46:18 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:18 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:46:18 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:18.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:46:18 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:18 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f64001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:18 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e124 e124: 3 total, 3 up, 3 in
Dec 07 09:46:19 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:19 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f50001820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:19 compute-1 sudo[92333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxbhvvixodhikmcwjnpvvhaihggqscxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100779.1626153-559-267870593697748/AnsiballZ_file.py'
Dec 07 09:46:19 compute-1 sudo[92333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:46:19 compute-1 python3.9[92335]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:46:19 compute-1 sudo[92333]: pam_unix(sudo:session): session closed for user root
Dec 07 09:46:19 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:19 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:46:19 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:19.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:46:19 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:19 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f740014d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:19 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e125 e125: 3 total, 3 up, 3 in
Dec 07 09:46:19 compute-1 ceph-mon[80077]: pgmap v90: 337 pgs: 1 remapped+peering, 336 active+clean; 458 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 511 B/s wr, 0 op/s; 18 B/s, 0 objects/s recovering
Dec 07 09:46:19 compute-1 ceph-mon[80077]: osdmap e124: 3 total, 3 up, 3 in
Dec 07 09:46:20 compute-1 sudo[92486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvquovfrqedxzhcgjkalbopneoeyexxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100779.9983878-583-106641245311384/AnsiballZ_stat.py'
Dec 07 09:46:20 compute-1 sudo[92486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:46:20 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:20 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 07 09:46:20 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:20.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 07 09:46:20 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:46:20 compute-1 python3.9[92488]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:46:20 compute-1 sudo[92486]: pam_unix(sudo:session): session closed for user root
Dec 07 09:46:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:20 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c004110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:20 compute-1 sudo[92564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qivljgmmkrstqhkqviirfydugzamyffi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100779.9983878-583-106641245311384/AnsiballZ_file.py'
Dec 07 09:46:20 compute-1 sudo[92564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:46:21 compute-1 ceph-mon[80077]: osdmap e125: 3 total, 3 up, 3 in
Dec 07 09:46:21 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e126 e126: 3 total, 3 up, 3 in
Dec 07 09:46:21 compute-1 python3.9[92566]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:46:21 compute-1 sudo[92564]: pam_unix(sudo:session): session closed for user root
Dec 07 09:46:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:21 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f64001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:21 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:21 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:46:21 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:21.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:46:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:21 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f50002cb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:22 compute-1 ceph-mon[80077]: pgmap v93: 337 pgs: 1 remapped+peering, 336 active+clean; 458 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 07 09:46:22 compute-1 ceph-mon[80077]: osdmap e126: 3 total, 3 up, 3 in
Dec 07 09:46:22 compute-1 sudo[92717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnjmtmqfiravahcfepldafgepdyevtaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100781.8014836-646-275087389039532/AnsiballZ_stat.py'
Dec 07 09:46:22 compute-1 sudo[92717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:46:22 compute-1 python3.9[92719]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 07 09:46:22 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:22 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 07 09:46:22 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:22.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 07 09:46:22 compute-1 sudo[92717]: pam_unix(sudo:session): session closed for user root
Dec 07 09:46:22 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:22 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f740014d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:23 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c004130 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:23 compute-1 sudo[92871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxvnibiryqbduyvdemnpzwbwrbhrntjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100782.999095-685-276541364414694/AnsiballZ_getent.py'
Dec 07 09:46:23 compute-1 sudo[92871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:46:23 compute-1 python3.9[92873]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec 07 09:46:23 compute-1 sudo[92871]: pam_unix(sudo:session): session closed for user root
Dec 07 09:46:23 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:23 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 07 09:46:23 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:23.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 07 09:46:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:23 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f64001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:24 compute-1 ceph-mon[80077]: pgmap v95: 337 pgs: 1 remapped+peering, 336 active+clean; 458 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 701 B/s rd, 0 op/s
Dec 07 09:46:24 compute-1 sudo[93025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utssvnlmyyrdovnnwqnaqvfwlojyabmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100784.0383227-715-19634983620038/AnsiballZ_getent.py'
Dec 07 09:46:24 compute-1 sudo[93025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:46:24 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:24 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 07 09:46:24 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:24.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 07 09:46:24 compute-1 python3.9[93027]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec 07 09:46:24 compute-1 sudo[93025]: pam_unix(sudo:session): session closed for user root
Dec 07 09:46:24 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:24 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f64001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:25 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f740014d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:25 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:46:25 compute-1 sudo[93178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlkapthirwfvgbbumdcgehnmcdmnsqwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100784.9052925-739-263424564428912/AnsiballZ_group.py'
Dec 07 09:46:25 compute-1 sudo[93178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:46:25 compute-1 python3.9[93180]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 07 09:46:25 compute-1 sudo[93178]: pam_unix(sudo:session): session closed for user root
Dec 07 09:46:25 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:25 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:46:25 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:25.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:46:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:25 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c004150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:26 compute-1 ceph-mon[80077]: pgmap v96: 337 pgs: 1 remapped+peering, 336 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 07 09:46:26 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:26 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 07 09:46:26 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:26.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 07 09:46:26 compute-1 sudo[93331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cigzecqeiqwffayonxpesrprjylmgeqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100786.102831-766-603048788186/AnsiballZ_file.py'
Dec 07 09:46:26 compute-1 sudo[93331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:46:26 compute-1 python3.9[93333]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec 07 09:46:26 compute-1 sudo[93331]: pam_unix(sudo:session): session closed for user root
Dec 07 09:46:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:26 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f50002cb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:27 compute-1 ceph-mon[80077]: pgmap v97: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 416 B/s rd, 0 op/s; 0 B/s, 0 objects/s recovering
Dec 07 09:46:27 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec 07 09:46:27 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e127 e127: 3 total, 3 up, 3 in
Dec 07 09:46:27 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:27 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f64001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:27 compute-1 sudo[93483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzvxqfqahwiuvntasamplhkpsrkzmbjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100787.1421423-799-110448831219787/AnsiballZ_dnf.py'
Dec 07 09:46:27 compute-1 sudo[93483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:46:27 compute-1 python3.9[93485]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 07 09:46:27 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:27 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:46:27 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:27.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:46:27 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:27 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f740014d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:28 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec 07 09:46:28 compute-1 ceph-mon[80077]: osdmap e127: 3 total, 3 up, 3 in
Dec 07 09:46:28 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:46:28 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:28 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:46:28 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:28.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:46:28 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e128 e128: 3 total, 3 up, 3 in
Dec 07 09:46:28 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:28 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c004170 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:29 compute-1 sudo[93483]: pam_unix(sudo:session): session closed for user root
Dec 07 09:46:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:29 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f50002cb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:29 compute-1 ceph-mon[80077]: pgmap v99: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Dec 07 09:46:29 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Dec 07 09:46:29 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Dec 07 09:46:29 compute-1 ceph-mon[80077]: osdmap e128: 3 total, 3 up, 3 in
Dec 07 09:46:29 compute-1 sudo[93637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-miehpqgaajgobhknrhuayedvpdwfonvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100789.4010544-824-30120076879889/AnsiballZ_file.py'
Dec 07 09:46:29 compute-1 sudo[93637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:46:29 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:29 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 07 09:46:29 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:29.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 07 09:46:29 compute-1 python3.9[93639]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:46:29 compute-1 sudo[93637]: pam_unix(sudo:session): session closed for user root
Dec 07 09:46:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:29 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f64001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:30 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:30 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 07 09:46:30 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:30.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 07 09:46:30 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Dec 07 09:46:30 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:46:30 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e129 e129: 3 total, 3 up, 3 in
Dec 07 09:46:30 compute-1 sudo[93790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azeptdudnvabhmqjzuttfqqpeniqakfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100790.2429469-847-1201011701386/AnsiballZ_stat.py'
Dec 07 09:46:30 compute-1 sudo[93790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:46:30 compute-1 python3.9[93792]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:46:30 compute-1 sudo[93790]: pam_unix(sudo:session): session closed for user root
Dec 07 09:46:30 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:30 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f740014d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:31 compute-1 sudo[93868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lftyqpmnejpnxstrfywctnvltxyuwvng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100790.2429469-847-1201011701386/AnsiballZ_file.py'
Dec 07 09:46:31 compute-1 sudo[93868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:46:31 compute-1 python3.9[93870]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:46:31 compute-1 sudo[93868]: pam_unix(sudo:session): session closed for user root
Dec 07 09:46:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:31 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c004190 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:31 compute-1 ceph-mon[80077]: pgmap v101: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s; 0 B/s, 0 objects/s recovering
Dec 07 09:46:31 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Dec 07 09:46:31 compute-1 ceph-mon[80077]: osdmap e129: 3 total, 3 up, 3 in
Dec 07 09:46:31 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:31 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:46:31 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:31.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:46:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:31 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f50003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:31 compute-1 sudo[94020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqkfhvkxiorstlhsbenfdarxtpwgvujk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100791.634446-886-23105933515320/AnsiballZ_stat.py'
Dec 07 09:46:31 compute-1 sudo[94020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:46:32 compute-1 python3.9[94022]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:46:32 compute-1 sudo[94020]: pam_unix(sudo:session): session closed for user root
Dec 07 09:46:32 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:32 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:46:32 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:32.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:46:32 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Dec 07 09:46:32 compute-1 sudo[94099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-teqbypewndledzfbwpkdawaijvmkihsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100791.634446-886-23105933515320/AnsiballZ_file.py'
Dec 07 09:46:32 compute-1 sudo[94099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:46:32 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e130 e130: 3 total, 3 up, 3 in
Dec 07 09:46:32 compute-1 python3.9[94101]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:46:32 compute-1 sudo[94099]: pam_unix(sudo:session): session closed for user root
Dec 07 09:46:32 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:32 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f64001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:33 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f74001670 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:33 compute-1 sudo[94251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtlpkrrittsfqvsdltfhjisjfrqbjltz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100793.2236693-931-82420358994330/AnsiballZ_dnf.py'
Dec 07 09:46:33 compute-1 sudo[94251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:46:33 compute-1 python3.9[94253]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 07 09:46:33 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:33 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 07 09:46:33 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:33.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 07 09:46:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:33 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c0041b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:34 compute-1 ceph-mon[80077]: pgmap v103: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 07 09:46:34 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Dec 07 09:46:34 compute-1 ceph-mon[80077]: osdmap e130: 3 total, 3 up, 3 in
Dec 07 09:46:34 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:34 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 07 09:46:34 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:34.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 07 09:46:34 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:34 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f50003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:35 compute-1 sudo[94251]: pam_unix(sudo:session): session closed for user root
Dec 07 09:46:35 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:35 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f64001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:35 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e131 e131: 3 total, 3 up, 3 in
Dec 07 09:46:35 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 131 pg[10.19( v 56'1095 (0'0,56'1095] local-lis/les=90/91 n=7 ec=57/42 lis/c=90/90 les/c/f=91/91/0 sis=131 pruub=8.442426682s) [0] r=-1 lpr=131 pi=[90,131)/1 crt=56'1095 mlcod 0'0 active pruub 359.737030029s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:46:35 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 131 pg[10.19( v 56'1095 (0'0,56'1095] local-lis/les=90/91 n=7 ec=57/42 lis/c=90/90 les/c/f=91/91/0 sis=131 pruub=8.442338943s) [0] r=-1 lpr=131 pi=[90,131)/1 crt=56'1095 mlcod 0'0 unknown NOTIFY pruub 359.737030029s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:46:35 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Dec 07 09:46:35 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:46:35 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:35 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:46:35 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:35.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:46:35 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:35 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f74001690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:36 compute-1 python3.9[94405]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 07 09:46:36 compute-1 sudo[94406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 09:46:36 compute-1 sudo[94406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:46:36 compute-1 sudo[94406]: pam_unix(sudo:session): session closed for user root
Dec 07 09:46:36 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e132 e132: 3 total, 3 up, 3 in
Dec 07 09:46:36 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 132 pg[10.19( v 56'1095 (0'0,56'1095] local-lis/les=90/91 n=7 ec=57/42 lis/c=90/90 les/c/f=91/91/0 sis=132) [0]/[1] r=0 lpr=132 pi=[90,132)/1 crt=56'1095 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:46:36 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 132 pg[10.19( v 56'1095 (0'0,56'1095] local-lis/les=90/91 n=7 ec=57/42 lis/c=90/90 les/c/f=91/91/0 sis=132) [0]/[1] r=0 lpr=132 pi=[90,132)/1 crt=56'1095 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 07 09:46:36 compute-1 ceph-mon[80077]: pgmap v105: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 07 09:46:36 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Dec 07 09:46:36 compute-1 ceph-mon[80077]: osdmap e131: 3 total, 3 up, 3 in
Dec 07 09:46:36 compute-1 ceph-mon[80077]: osdmap e132: 3 total, 3 up, 3 in
Dec 07 09:46:36 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Dec 07 09:46:36 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:36 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:46:36 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:36.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:46:36 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:36 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c0041d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:37 compute-1 python3.9[94583]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec 07 09:46:37 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:37 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f50003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:37 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e133 e133: 3 total, 3 up, 3 in
Dec 07 09:46:37 compute-1 ceph-mon[80077]: pgmap v108: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Dec 07 09:46:37 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Dec 07 09:46:37 compute-1 ceph-mon[80077]: osdmap e133: 3 total, 3 up, 3 in
Dec 07 09:46:37 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 133 pg[10.19( v 56'1095 (0'0,56'1095] local-lis/les=132/133 n=7 ec=57/42 lis/c=90/90 les/c/f=91/91/0 sis=132) [0]/[1] async=[0] r=0 lpr=132 pi=[90,132)/1 crt=56'1095 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:46:37 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e134 e134: 3 total, 3 up, 3 in
Dec 07 09:46:37 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 134 pg[10.19( v 56'1095 (0'0,56'1095] local-lis/les=132/133 n=7 ec=57/42 lis/c=132/90 les/c/f=133/91/0 sis=134 pruub=15.685693741s) [0] async=[0] r=-1 lpr=134 pi=[90,134)/1 crt=56'1095 mlcod 56'1095 active pruub 369.387725830s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:46:37 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 134 pg[10.19( v 56'1095 (0'0,56'1095] local-lis/les=132/133 n=7 ec=57/42 lis/c=132/90 les/c/f=133/91/0 sis=134 pruub=15.685271263s) [0] r=-1 lpr=134 pi=[90,134)/1 crt=56'1095 mlcod 0'0 unknown NOTIFY pruub 369.387725830s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:46:37 compute-1 python3.9[94733]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 07 09:46:37 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:37 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:46:37 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:37.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:46:37 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:37 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f64001fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:38 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:38 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 07 09:46:38 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:38.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 07 09:46:38 compute-1 ceph-mon[80077]: osdmap e134: 3 total, 3 up, 3 in
Dec 07 09:46:38 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Dec 07 09:46:38 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e135 e135: 3 total, 3 up, 3 in
Dec 07 09:46:38 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 135 pg[10.1b( v 56'1095 (0'0,56'1095] local-lis/les=97/98 n=2 ec=57/42 lis/c=97/97 les/c/f=98/98/0 sis=135 pruub=9.063591957s) [0] r=-1 lpr=135 pi=[97,135)/1 crt=56'1095 mlcod 0'0 active pruub 363.794708252s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:46:38 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 135 pg[10.1b( v 56'1095 (0'0,56'1095] local-lis/les=97/98 n=2 ec=57/42 lis/c=97/97 les/c/f=98/98/0 sis=135 pruub=9.062810898s) [0] r=-1 lpr=135 pi=[97,135)/1 crt=56'1095 mlcod 0'0 unknown NOTIFY pruub 363.794708252s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:46:38 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:38 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f74004850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:38 compute-1 sudo[94811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 09:46:38 compute-1 sudo[94811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:46:38 compute-1 sudo[94811]: pam_unix(sudo:session): session closed for user root
Dec 07 09:46:39 compute-1 sudo[94846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 07 09:46:39 compute-1 sudo[94846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:46:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:39 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c0041f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:39 compute-1 sudo[94949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgzaggibtxjmmdbtkqjbebsjzxczeebx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100798.5631194-1054-125115317671698/AnsiballZ_systemd.py'
Dec 07 09:46:39 compute-1 sudo[94949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:46:39 compute-1 sudo[94846]: pam_unix(sudo:session): session closed for user root
Dec 07 09:46:39 compute-1 python3.9[94953]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 07 09:46:39 compute-1 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec 07 09:46:39 compute-1 systemd[1]: tuned.service: Deactivated successfully.
Dec 07 09:46:39 compute-1 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec 07 09:46:39 compute-1 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 07 09:46:39 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:39 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 07 09:46:39 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:39.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 07 09:46:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:39 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f50003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:40 compute-1 systemd[1]: Started Dynamic System Tuning Daemon.
Dec 07 09:46:40 compute-1 sudo[94949]: pam_unix(sudo:session): session closed for user root
Dec 07 09:46:40 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:40 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 07 09:46:40 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:40.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 07 09:46:40 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:46:40 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:40 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f88001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:40 compute-1 python3.9[95130]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec 07 09:46:41 compute-1 ceph-mon[80077]: pgmap v111: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Dec 07 09:46:41 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Dec 07 09:46:41 compute-1 ceph-mon[80077]: osdmap e135: 3 total, 3 up, 3 in
Dec 07 09:46:41 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e136 e136: 3 total, 3 up, 3 in
Dec 07 09:46:41 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 136 pg[10.1b( v 56'1095 (0'0,56'1095] local-lis/les=97/98 n=2 ec=57/42 lis/c=97/97 les/c/f=98/98/0 sis=136) [0]/[1] r=0 lpr=136 pi=[97,136)/1 crt=56'1095 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:46:41 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 136 pg[10.1b( v 56'1095 (0'0,56'1095] local-lis/les=97/98 n=2 ec=57/42 lis/c=97/97 les/c/f=98/98/0 sis=136) [0]/[1] r=0 lpr=136 pi=[97,136)/1 crt=56'1095 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 07 09:46:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:41 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f74004850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:41 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:41 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:46:41 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:41.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:46:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:41 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c004210 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:42 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:42 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 07 09:46:42 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:42.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 07 09:46:42 compute-1 ceph-mon[80077]: pgmap v113: 337 pgs: 1 unknown, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 765 B/s rd, 0 op/s; 27 B/s, 1 objects/s recovering
Dec 07 09:46:42 compute-1 ceph-mon[80077]: osdmap e136: 3 total, 3 up, 3 in
Dec 07 09:46:42 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e137 e137: 3 total, 3 up, 3 in
Dec 07 09:46:42 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 137 pg[10.1b( v 56'1095 (0'0,56'1095] local-lis/les=136/137 n=2 ec=57/42 lis/c=97/97 les/c/f=98/98/0 sis=136) [0]/[1] async=[0] r=0 lpr=136 pi=[97,136)/1 crt=56'1095 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:46:42 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:42 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f50003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:42 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e138 e138: 3 total, 3 up, 3 in
Dec 07 09:46:42 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 138 pg[10.1b( v 56'1095 (0'0,56'1095] local-lis/les=136/137 n=2 ec=57/42 lis/c=136/97 les/c/f=137/98/0 sis=138 pruub=15.680373192s) [0] async=[0] r=-1 lpr=138 pi=[97,138)/1 crt=56'1095 mlcod 56'1095 active pruub 374.595794678s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:46:42 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 138 pg[10.1b( v 56'1095 (0'0,56'1095] local-lis/les=136/137 n=2 ec=57/42 lis/c=136/97 les/c/f=137/98/0 sis=138 pruub=15.680295944s) [0] r=-1 lpr=138 pi=[97,138)/1 crt=56'1095 mlcod 0'0 unknown NOTIFY pruub 374.595794678s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:46:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:43 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f88001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:43 compute-1 ceph-mon[80077]: pgmap v115: 337 pgs: 1 unknown, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 615 B/s rd, 0 op/s; 22 B/s, 1 objects/s recovering
Dec 07 09:46:43 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:46:43 compute-1 ceph-mon[80077]: osdmap e137: 3 total, 3 up, 3 in
Dec 07 09:46:43 compute-1 ceph-mon[80077]: osdmap e138: 3 total, 3 up, 3 in
Dec 07 09:46:43 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:43 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:46:43 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:43.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:46:43 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e139 e139: 3 total, 3 up, 3 in
Dec 07 09:46:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:43 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f74004850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:44 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:44 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:46:44 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:44.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:46:44 compute-1 sudo[95282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcubndgdqqtbmydwtjdvvfustvwrvhsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100804.241382-1225-231521340136841/AnsiballZ_systemd.py'
Dec 07 09:46:44 compute-1 sudo[95282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:46:44 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:44 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c004230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:44 compute-1 python3.9[95284]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 07 09:46:44 compute-1 ceph-mon[80077]: osdmap e139: 3 total, 3 up, 3 in
Dec 07 09:46:44 compute-1 sudo[95282]: pam_unix(sudo:session): session closed for user root
Dec 07 09:46:45 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:45 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f50003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:45 compute-1 sudo[95436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlcvghsrpjsxhmwekvjgicjqnlippgfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100805.1125517-1225-197299574109052/AnsiballZ_systemd.py'
Dec 07 09:46:45 compute-1 sudo[95436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:46:45 compute-1 python3.9[95438]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 07 09:46:45 compute-1 sudo[95436]: pam_unix(sudo:session): session closed for user root
Dec 07 09:46:45 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:46:45 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:45 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:46:45 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:45.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:46:45 compute-1 ceph-mon[80077]: pgmap v119: 337 pgs: 1 unknown, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Dec 07 09:46:45 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:45 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f50003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:46 compute-1 ceph-mon[80077]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0.
Dec 07 09:46:46 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:46:46.254058) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 07 09:46:46 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19
Dec 07 09:46:46 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100806254117, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 3249, "num_deletes": 252, "total_data_size": 10469496, "memory_usage": 10813040, "flush_reason": "Manual Compaction"}
Dec 07 09:46:46 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started
Dec 07 09:46:46 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100806342239, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 6694892, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7559, "largest_seqno": 10803, "table_properties": {"data_size": 6680517, "index_size": 9205, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 4101, "raw_key_size": 37612, "raw_average_key_size": 23, "raw_value_size": 6648584, "raw_average_value_size": 4066, "num_data_blocks": 396, "num_entries": 1635, "num_filter_entries": 1635, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100663, "oldest_key_time": 1765100663, "file_creation_time": 1765100806, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "19b7cd17-892b-4642-8771-311739802c4a", "db_session_id": "JE48258Z8V09XG5T5TD7", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Dec 07 09:46:46 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 88456 microseconds, and 12598 cpu microseconds.
Dec 07 09:46:46 compute-1 ceph-mon[80077]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 07 09:46:46 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:46:46.342321) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 6694892 bytes OK
Dec 07 09:46:46 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:46:46.342544) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started
Dec 07 09:46:46 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:46:46.344678) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done
Dec 07 09:46:46 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:46:46.344739) EVENT_LOG_v1 {"time_micros": 1765100806344725, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 07 09:46:46 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:46:46.344772) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 07 09:46:46 compute-1 ceph-mon[80077]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 10453677, prev total WAL file size 10453677, number of live WAL files 2.
Dec 07 09:46:46 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 09:46:46 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:46:46.347276) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Dec 07 09:46:46 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 07 09:46:46 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(6537KB)], [18(11MB)]
Dec 07 09:46:46 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100806347316, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 18620621, "oldest_snapshot_seqno": -1}
Dec 07 09:46:46 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:46 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:46:46 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:46.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:46:46 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 4141 keys, 14108874 bytes, temperature: kUnknown
Dec 07 09:46:46 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100806474804, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 14108874, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14075671, "index_size": 21765, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10373, "raw_key_size": 105704, "raw_average_key_size": 25, "raw_value_size": 13994320, "raw_average_value_size": 3379, "num_data_blocks": 935, "num_entries": 4141, "num_filter_entries": 4141, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100473, "oldest_key_time": 0, "file_creation_time": 1765100806, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "19b7cd17-892b-4642-8771-311739802c4a", "db_session_id": "JE48258Z8V09XG5T5TD7", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}}
Dec 07 09:46:46 compute-1 ceph-mon[80077]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 07 09:46:46 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:46:46.475760) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 14108874 bytes
Dec 07 09:46:46 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:46:46.479059) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 145.6 rd, 110.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(6.4, 11.4 +0.0 blob) out(13.5 +0.0 blob), read-write-amplify(4.9) write-amplify(2.1) OK, records in: 4679, records dropped: 538 output_compression: NoCompression
Dec 07 09:46:46 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:46:46.479106) EVENT_LOG_v1 {"time_micros": 1765100806479085, "job": 8, "event": "compaction_finished", "compaction_time_micros": 127897, "compaction_time_cpu_micros": 31486, "output_level": 6, "num_output_files": 1, "total_output_size": 14108874, "num_input_records": 4679, "num_output_records": 4141, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 07 09:46:46 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 09:46:46 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100806482429, "job": 8, "event": "table_file_deletion", "file_number": 20}
Dec 07 09:46:46 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 09:46:46 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100806488208, "job": 8, "event": "table_file_deletion", "file_number": 18}
Dec 07 09:46:46 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:46:46.347181) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 09:46:46 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:46:46.488548) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 09:46:46 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:46:46.488564) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 09:46:46 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:46:46.488570) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 09:46:46 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:46:46.488574) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 09:46:46 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:46:46.488579) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 09:46:46 compute-1 sshd-session[89003]: Connection closed by 192.168.122.30 port 43842
Dec 07 09:46:46 compute-1 sshd-session[89000]: pam_unix(sshd:session): session closed for user zuul
Dec 07 09:46:46 compute-1 systemd[1]: session-38.scope: Deactivated successfully.
Dec 07 09:46:46 compute-1 systemd-logind[796]: Session 38 logged out. Waiting for processes to exit.
Dec 07 09:46:46 compute-1 systemd[1]: session-38.scope: Consumed 1min 7.725s CPU time.
Dec 07 09:46:46 compute-1 systemd-logind[796]: Removed session 38.
Dec 07 09:46:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:46 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f50003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:47 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:46:47 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Dec 07 09:46:47 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:46:47 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e140 e140: 3 total, 3 up, 3 in
Dec 07 09:46:47 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:47 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c004250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:47 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:47 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:46:47 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:47.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:46:47 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:47 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f88001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:48 compute-1 ceph-mon[80077]: pgmap v120: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 21 B/s, 0 objects/s recovering
Dec 07 09:46:48 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:46:48 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 07 09:46:48 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Dec 07 09:46:48 compute-1 ceph-mon[80077]: osdmap e140: 3 total, 3 up, 3 in
Dec 07 09:46:48 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:46:48 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:46:48 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 07 09:46:48 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 07 09:46:48 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:46:48 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:48 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:46:48 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:48.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:46:48 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:48 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f88001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:49 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Dec 07 09:46:49 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e141 e141: 3 total, 3 up, 3 in
Dec 07 09:46:49 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:49 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f74004850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:49 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:49 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:46:49 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:49.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:46:49 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:49 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c004270 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:50 compute-1 ceph-mon[80077]: pgmap v122: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Dec 07 09:46:50 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Dec 07 09:46:50 compute-1 ceph-mon[80077]: osdmap e141: 3 total, 3 up, 3 in
Dec 07 09:46:50 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:50 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:46:50 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:50.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:46:50 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:46:50 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:50 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f50003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:51 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Dec 07 09:46:51 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e142 e142: 3 total, 3 up, 3 in
Dec 07 09:46:51 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 142 pg[10.1e( v 56'1095 (0'0,56'1095] local-lis/les=80/81 n=5 ec=57/42 lis/c=80/80 les/c/f=81/81/0 sis=142 pruub=10.963578224s) [2] r=-1 lpr=142 pi=[80,142)/1 crt=56'1095 mlcod 0'0 active pruub 378.025390625s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:46:51 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 142 pg[10.1e( v 56'1095 (0'0,56'1095] local-lis/les=80/81 n=5 ec=57/42 lis/c=80/80 les/c/f=81/81/0 sis=142 pruub=10.963443756s) [2] r=-1 lpr=142 pi=[80,142)/1 crt=56'1095 mlcod 0'0 unknown NOTIFY pruub 378.025390625s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:46:51 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:51 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f58001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:51 compute-1 sudo[95469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 07 09:46:51 compute-1 sudo[95469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:46:51 compute-1 sudo[95469]: pam_unix(sudo:session): session closed for user root
Dec 07 09:46:51 compute-1 sshd-session[95472]: Accepted publickey for zuul from 192.168.122.30 port 39540 ssh2: ECDSA SHA256:Ge4vEI3tJyOAEV3kv3WUGTzBV/uoL+dgWZHkPhbKL64
Dec 07 09:46:51 compute-1 systemd-logind[796]: New session 39 of user zuul.
Dec 07 09:46:51 compute-1 systemd[1]: Started Session 39 of User zuul.
Dec 07 09:46:51 compute-1 sshd-session[95472]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 07 09:46:51 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:51 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:46:51 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:51.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:46:51 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:51 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f74004850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:52 compute-1 ceph-mon[80077]: pgmap v124: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 474 B/s rd, 0 op/s; 17 B/s, 0 objects/s recovering
Dec 07 09:46:52 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Dec 07 09:46:52 compute-1 ceph-mon[80077]: osdmap e142: 3 total, 3 up, 3 in
Dec 07 09:46:52 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:46:52 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:46:52 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e143 e143: 3 total, 3 up, 3 in
Dec 07 09:46:52 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 143 pg[10.1e( v 56'1095 (0'0,56'1095] local-lis/les=80/81 n=5 ec=57/42 lis/c=80/80 les/c/f=81/81/0 sis=143) [2]/[1] r=0 lpr=143 pi=[80,143)/1 crt=56'1095 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:46:52 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 143 pg[10.1e( v 56'1095 (0'0,56'1095] local-lis/les=80/81 n=5 ec=57/42 lis/c=80/80 les/c/f=81/81/0 sis=143) [2]/[1] r=0 lpr=143 pi=[80,143)/1 crt=56'1095 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 07 09:46:52 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:52 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 07 09:46:52 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:52.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 07 09:46:52 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:52 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c004290 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:52 compute-1 python3.9[95648]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 07 09:46:53 compute-1 ceph-mon[80077]: osdmap e143: 3 total, 3 up, 3 in
Dec 07 09:46:53 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 07 09:46:53 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e144 e144: 3 total, 3 up, 3 in
Dec 07 09:46:53 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 144 pg[10.1f( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=108/108 les/c/f=109/109/0 sis=144) [1] r=0 lpr=144 pi=[108,144)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:46:53 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 144 pg[10.1e( v 56'1095 (0'0,56'1095] local-lis/les=143/144 n=5 ec=57/42 lis/c=80/80 les/c/f=81/81/0 sis=143) [2]/[1] async=[2] r=0 lpr=143 pi=[80,143)/1 crt=56'1095 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:46:53 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:53 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c004290 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:53 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:53 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:46:53 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:53.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:46:53 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:53 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f88001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:54 compute-1 ceph-mon[80077]: pgmap v127: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 587 B/s rd, 0 op/s
Dec 07 09:46:54 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 07 09:46:54 compute-1 ceph-mon[80077]: osdmap e144: 3 total, 3 up, 3 in
Dec 07 09:46:54 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e145 e145: 3 total, 3 up, 3 in
Dec 07 09:46:54 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 145 pg[10.1f( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=108/108 les/c/f=109/109/0 sis=145) [1]/[2] r=-1 lpr=145 pi=[108,145)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:46:54 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 145 pg[10.1f( empty local-lis/les=0/0 n=0 ec=57/42 lis/c=108/108 les/c/f=109/109/0 sis=145) [1]/[2] r=-1 lpr=145 pi=[108,145)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 07 09:46:54 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 145 pg[10.1e( v 56'1095 (0'0,56'1095] local-lis/les=143/144 n=5 ec=57/42 lis/c=143/80 les/c/f=144/81/0 sis=145 pruub=14.981105804s) [2] async=[2] r=-1 lpr=145 pi=[80,145)/1 crt=56'1095 mlcod 56'1095 active pruub 385.117523193s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:46:54 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 145 pg[10.1e( v 56'1095 (0'0,56'1095] local-lis/les=143/144 n=5 ec=57/42 lis/c=143/80 les/c/f=144/81/0 sis=145 pruub=14.981057167s) [2] r=-1 lpr=145 pi=[80,145)/1 crt=56'1095 mlcod 0'0 unknown NOTIFY pruub 385.117523193s@ mbc={}] state<Start>: transitioning to Stray
Dec 07 09:46:54 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:54 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:46:54 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:54.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:46:54 compute-1 sudo[95804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upxkfavyaaqpaohlguhuphdvjqgjundb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100814.3069038-69-72814927976128/AnsiballZ_getent.py'
Dec 07 09:46:54 compute-1 sudo[95804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:46:54 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:54 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f80002340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:54 compute-1 python3.9[95806]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec 07 09:46:54 compute-1 sudo[95804]: pam_unix(sudo:session): session closed for user root
Dec 07 09:46:55 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:55 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f58001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:55 compute-1 sudo[95957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxmrixeowdmeunsnyvnxinxzirwwwlfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100815.297617-105-260295960282020/AnsiballZ_setup.py'
Dec 07 09:46:55 compute-1 sudo[95957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:46:55 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:46:55 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:55 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:46:55 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:55.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:46:55 compute-1 python3.9[95959]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 07 09:46:55 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:55 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c004430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:56 compute-1 sudo[95967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 09:46:56 compute-1 sudo[95967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:46:56 compute-1 sudo[95967]: pam_unix(sudo:session): session closed for user root
Dec 07 09:46:56 compute-1 sudo[95957]: pam_unix(sudo:session): session closed for user root
Dec 07 09:46:56 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:56 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:46:56 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:56.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:46:56 compute-1 sudo[96067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wftliosfxzbgynjbtlllvgnjodafjjgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100815.297617-105-260295960282020/AnsiballZ_dnf.py'
Dec 07 09:46:56 compute-1 sudo[96067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:46:56 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e146 e146: 3 total, 3 up, 3 in
Dec 07 09:46:56 compute-1 ceph-mon[80077]: osdmap e145: 3 total, 3 up, 3 in
Dec 07 09:46:56 compute-1 python3.9[96069]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 07 09:46:56 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:56 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f88001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:57 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f88001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:57 compute-1 ceph-mon[80077]: pgmap v130: 337 pgs: 1 unknown, 1 active+remapped, 335 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 1 objects/s recovering
Dec 07 09:46:57 compute-1 ceph-mon[80077]: pgmap v131: 337 pgs: 1 unknown, 1 active+remapped, 335 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Dec 07 09:46:57 compute-1 ceph-mon[80077]: osdmap e146: 3 total, 3 up, 3 in
Dec 07 09:46:57 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:46:57 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e147 e147: 3 total, 3 up, 3 in
Dec 07 09:46:57 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 147 pg[10.1f( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=5 ec=57/42 lis/c=145/108 les/c/f=146/109/0 sis=147) [1] r=0 lpr=147 pi=[108,147)/1 luod=0'0 crt=56'1095 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 07 09:46:57 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 147 pg[10.1f( v 56'1095 (0'0,56'1095] local-lis/les=0/0 n=5 ec=57/42 lis/c=145/108 les/c/f=146/109/0 sis=147) [1] r=0 lpr=147 pi=[108,147)/1 crt=56'1095 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 07 09:46:57 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:57 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:46:57 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:57.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:46:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:57 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f58001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:58 compute-1 sudo[96067]: pam_unix(sudo:session): session closed for user root
Dec 07 09:46:58 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:58 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:46:58 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:46:58.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:46:58 compute-1 ceph-mon[80077]: osdmap e147: 3 total, 3 up, 3 in
Dec 07 09:46:58 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 e148: 3 total, 3 up, 3 in
Dec 07 09:46:58 compute-1 ceph-osd[77581]: osd.1 pg_epoch: 148 pg[10.1f( v 56'1095 (0'0,56'1095] local-lis/les=147/148 n=5 ec=57/42 lis/c=145/108 les/c/f=146/109/0 sis=147) [1] r=0 lpr=147 pi=[108,147)/1 crt=56'1095 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 07 09:46:58 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:58 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c004430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:58 compute-1 sudo[96221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzxejieneuqrohpanfzqmpbgwfsvupwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100818.5955472-147-132618117609576/AnsiballZ_dnf.py'
Dec 07 09:46:58 compute-1 sudo[96221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:46:59 compute-1 python3.9[96223]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 07 09:46:59 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:59 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f88001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:46:59 compute-1 ceph-mon[80077]: pgmap v134: 337 pgs: 1 unknown, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Dec 07 09:46:59 compute-1 ceph-mon[80077]: osdmap e148: 3 total, 3 up, 3 in
Dec 07 09:46:59 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:46:59 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:46:59 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:46:59.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:46:59 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:46:59 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f80002340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:47:00 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:00 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:47:00 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:00.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:47:00 compute-1 sudo[96221]: pam_unix(sudo:session): session closed for user root
Dec 07 09:47:00 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:47:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:47:00 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f80002340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:47:01 compute-1 sudo[96375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxcwlvplexnzlzaucgerjnvtjgvuwpjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100820.7088313-171-84663415160424/AnsiballZ_systemd.py'
Dec 07 09:47:01 compute-1 sudo[96375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:47:01 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:47:01 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c004430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:47:01 compute-1 python3.9[96377]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 07 09:47:01 compute-1 ceph-mon[80077]: pgmap v136: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Dec 07 09:47:01 compute-1 sudo[96375]: pam_unix(sudo:session): session closed for user root
Dec 07 09:47:01 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:01 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:47:01 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:01.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:47:01 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:47:01 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f8800ab20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:47:02 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:02 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:47:02 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:02.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:47:02 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:47:02 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f80002340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:47:03 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:47:03 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f58001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:47:03 compute-1 ceph-mon[80077]: pgmap v137: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Dec 07 09:47:03 compute-1 python3.9[96531]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 07 09:47:03 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:03 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:47:03 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:03.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:47:03 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:47:03 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c004430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:47:04 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:04 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:47:04 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:04.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:47:04 compute-1 sudo[96682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uicxtgioulipgruobqbthhyjtpikmxxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100824.0251834-225-37001845300563/AnsiballZ_sefcontext.py'
Dec 07 09:47:04 compute-1 sudo[96682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:47:04 compute-1 python3.9[96684]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec 07 09:47:04 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:47:04 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f8800ab20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:47:05 compute-1 sudo[96682]: pam_unix(sudo:session): session closed for user root
Dec 07 09:47:05 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:47:05 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f80002340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:47:05 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:47:05 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:05 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:47:05 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:05.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:47:05 compute-1 python3.9[96834]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 07 09:47:05 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:47:05 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f80002340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:47:06 compute-1 ceph-mon[80077]: pgmap v138: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 393 B/s rd, 0 op/s; 14 B/s, 0 objects/s recovering
Dec 07 09:47:06 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:06 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:47:06 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:06.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:47:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:47:06 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c004430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:47:06 compute-1 sudo[96991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afkzddftilcvzxbbytspxqolpaoirqso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100826.651455-279-271655646453633/AnsiballZ_dnf.py'
Dec 07 09:47:06 compute-1 sudo[96991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:47:07 compute-1 python3.9[96993]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
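[editor's note] The dnf module invocation logged above corresponds, roughly, to a plain dnf transaction on the host. A minimal manual equivalent, assuming the same enabled repositories and no version pins (this command is an illustration, not something recorded in the log), would be:

    # hypothetical manual equivalent of the logged ansible dnf module call
    dnf -y install driverctl lvm2 crudini jq nftables NetworkManager \
        openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch \
        sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts \
        grubby sos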
Dec 07 09:47:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:47:07 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f8800ab20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:47:07 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:07 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:47:07 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:07.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:47:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:47:07 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f80002340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:47:08 compute-1 ceph-mon[80077]: pgmap v139: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 350 B/s rd, 0 op/s; 12 B/s, 0 objects/s recovering
Dec 07 09:47:08 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:08 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:47:08 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:08.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:47:08 compute-1 sudo[96991]: pam_unix(sudo:session): session closed for user root
Dec 07 09:47:08 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:47:08 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f80002340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:47:09 compute-1 sudo[97145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbyoekuotxoponggvclkbivghpvkfsrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100828.7662933-303-2884215589874/AnsiballZ_command.py'
Dec 07 09:47:09 compute-1 sudo[97145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:47:09 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:47:09 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c004430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:47:09 compute-1 python3.9[97147]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:47:09 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:09 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:47:09 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:09.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:47:09 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:47:09 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f8800ab20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:47:10 compute-1 ceph-mon[80077]: pgmap v140: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 0 op/s; 10 B/s, 0 objects/s recovering
Dec 07 09:47:10 compute-1 sudo[97145]: pam_unix(sudo:session): session closed for user root
Dec 07 09:47:10 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:10 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:47:10 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:10.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:47:10 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:47:10 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:47:10 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f80002340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:47:11 compute-1 sudo[97433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxmemqgsljgcgxymbhzyjgktsetpukzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100830.502962-327-199652669654747/AnsiballZ_file.py'
Dec 07 09:47:11 compute-1 sudo[97433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:47:11 compute-1 python3.9[97435]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 07 09:47:11 compute-1 sudo[97433]: pam_unix(sudo:session): session closed for user root
Dec 07 09:47:11 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:47:11 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f58003c50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:47:11 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:11 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:47:11 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:11.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:47:11 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:47:11 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c004430 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:47:12 compute-1 python3.9[97585]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 07 09:47:12 compute-1 ceph-mon[80077]: pgmap v141: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 523 B/s rd, 0 op/s; 9 B/s, 0 objects/s recovering
Dec 07 09:47:12 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:12 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:47:12 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:12.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:47:12 compute-1 sudo[97738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzroxbgwplgjlhcivrureiahoibxvruf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100832.3163657-375-131503246965188/AnsiballZ_dnf.py'
Dec 07 09:47:12 compute-1 sudo[97738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:47:12 compute-1 python3.9[97740]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 07 09:47:12 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:47:12 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f8800ab20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:47:13 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:47:13 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:47:13 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f8800ab20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:47:13 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:13 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:47:13 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:13.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:47:13 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:47:13 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f58003c50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:47:14 compute-1 ceph-mon[80077]: pgmap v142: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:47:14 compute-1 sudo[97738]: pam_unix(sudo:session): session closed for user root
Dec 07 09:47:14 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:14 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:47:14 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:14.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:47:14 compute-1 sudo[97892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyolqflpuozsawefptuykukwtxuiticn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100834.4361649-402-182244806111039/AnsiballZ_dnf.py'
Dec 07 09:47:14 compute-1 sudo[97892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:47:14 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:47:14 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c004450 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:47:15 compute-1 python3.9[97894]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 07 09:47:15 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:47:15 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f8800ab20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:47:15 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:47:15 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:15 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:47:15 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:15.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:47:15 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:47:15 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f8800ab20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:47:16 compute-1 ceph-mon[80077]: pgmap v143: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:47:16 compute-1 sudo[97897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 09:47:16 compute-1 sudo[97897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:47:16 compute-1 sudo[97897]: pam_unix(sudo:session): session closed for user root
Dec 07 09:47:16 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:16 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:47:16 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:16.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:47:16 compute-1 sudo[97892]: pam_unix(sudo:session): session closed for user root
Dec 07 09:47:16 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:47:16 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f58003c50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:47:17 compute-1 sudo[98071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bylixwjibrbbknyvalckdkijxvnorspc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100836.9256167-438-90732081412637/AnsiballZ_stat.py'
Dec 07 09:47:17 compute-1 sudo[98071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:47:17 compute-1 ceph-mon[80077]: pgmap v144: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:47:17 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:47:17 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c004470 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:47:17 compute-1 python3.9[98073]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 07 09:47:17 compute-1 sshd-session[71310]: Received disconnect from 38.102.83.80 port 46360:11: disconnected by user
Dec 07 09:47:17 compute-1 sshd-session[71310]: Disconnected from user zuul 38.102.83.80 port 46360
Dec 07 09:47:17 compute-1 sudo[98071]: pam_unix(sudo:session): session closed for user root
Dec 07 09:47:17 compute-1 sshd-session[71307]: pam_unix(sshd:session): session closed for user zuul
Dec 07 09:47:17 compute-1 systemd[1]: session-19.scope: Deactivated successfully.
Dec 07 09:47:17 compute-1 systemd[1]: session-19.scope: Consumed 8.161s CPU time.
Dec 07 09:47:17 compute-1 systemd-logind[796]: Session 19 logged out. Waiting for processes to exit.
Dec 07 09:47:17 compute-1 systemd-logind[796]: Removed session 19.
Dec 07 09:47:17 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:17 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:47:17 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:17.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:47:17 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:47:17 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f8800ab20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:47:18 compute-1 sudo[98226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kssyybslslamyzapqpmxilfejrogbzer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100837.684288-462-228582910242431/AnsiballZ_slurp.py'
Dec 07 09:47:18 compute-1 sudo[98226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:47:18 compute-1 python3.9[98228]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Dec 07 09:47:18 compute-1 sudo[98226]: pam_unix(sudo:session): session closed for user root
Dec 07 09:47:18 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:18 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:47:18 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:18.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:47:18 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:47:18 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f8800ab20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:47:19 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:47:19 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f58003c50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:47:19 compute-1 ceph-mon[80077]: pgmap v145: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:47:19 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:19 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:47:19 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:19.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:47:19 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:47:19 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f5c004490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:47:20 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:20 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:47:20 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:20.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:47:20 compute-1 sshd-session[95497]: Connection closed by 192.168.122.30 port 39540
Dec 07 09:47:20 compute-1 sshd-session[95472]: pam_unix(sshd:session): session closed for user zuul
Dec 07 09:47:20 compute-1 systemd[1]: session-39.scope: Deactivated successfully.
Dec 07 09:47:20 compute-1 systemd[1]: session-39.scope: Consumed 19.615s CPU time.
Dec 07 09:47:20 compute-1 systemd-logind[796]: Session 39 logged out. Waiting for processes to exit.
Dec 07 09:47:20 compute-1 systemd-logind[796]: Removed session 39.
Dec 07 09:47:20 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:47:20 compute-1 kernel: ganesha.nfsd[95649]: segfault at 50 ip 00007f2035c5832e sp 00007f1ffbffe210 error 4 in libntirpc.so.5.8[7f2035c3d000+2c000] likely on CPU 2 (core 0, socket 2)
Dec 07 09:47:20 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec 07 09:47:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[86141]: 07/12/2025 09:47:20 : epoch 69354c38 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1f8800ab20 fd 48 proxy ignored for local
Dec 07 09:47:20 compute-1 systemd[1]: Created slice Slice /system/systemd-coredump.
Dec 07 09:47:20 compute-1 systemd[1]: Started Process Core Dump (PID 98254/UID 0).
Dec 07 09:47:21 compute-1 ceph-mon[80077]: pgmap v146: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 07 09:47:21 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:21 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:47:21 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:21.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:47:22 compute-1 systemd-coredump[98255]: Process 86145 (ganesha.nfsd) of user 0 dumped core.
                                                   
                                                   Stack trace of thread 69:
                                                   #0  0x00007f2035c5832e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                   ELF object binary architecture: AMD x86-64
Dec 07 09:47:22 compute-1 systemd[1]: systemd-coredump@0-98254-0.service: Deactivated successfully.
Dec 07 09:47:22 compute-1 systemd[1]: systemd-coredump@0-98254-0.service: Consumed 1.055s CPU time.
Dec 07 09:47:22 compute-1 podman[98261]: 2025-12-07 09:47:22.182936946 +0000 UTC m=+0.027253665 container died 88b1575e7bf6a1c4a6a2738ad8a5b427833ca1d8abecd12798715ede0a232df4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec 07 09:47:22 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:22 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:47:22 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:22.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:47:22 compute-1 systemd[1]: var-lib-containers-storage-overlay-f248a49ab6387e5ffb2c2e007ebe14a92733484d6962af1d4eb2531219a2658c-merged.mount: Deactivated successfully.
Dec 07 09:47:22 compute-1 podman[98261]: 2025-12-07 09:47:22.614492581 +0000 UTC m=+0.458809300 container remove 88b1575e7bf6a1c4a6a2738ad8a5b427833ca1d8abecd12798715ede0a232df4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 07 09:47:22 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Main process exited, code=exited, status=139/n/a
Dec 07 09:47:22 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Failed with result 'exit-code'.
Dec 07 09:47:22 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Consumed 2.057s CPU time.
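[editor's note] The segfault in libntirpc.so.5.8 above (PID 86145, later reported by systemd as exit status 139, i.e. signal 11) was captured by systemd-coredump. A minimal way to inspect the dump from the host, assuming it has not been rotated out yet, is sketched below; symbol resolution may be incomplete because ganesha and libntirpc live inside the Ceph container image rather than on the host.

    # list captured ganesha crashes and show details for the PID from the log
    coredumpctl list ganesha.nfsd
    coredumpctl info 86145
    # open the dump in gdb (requires gdb; debuginfo for the containerized
    # libntirpc may not be resolvable from the host)
    coredumpctl debug 86145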
Dec 07 09:47:23 compute-1 ceph-mon[80077]: pgmap v147: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:47:23 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:23 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:47:23 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:23.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:47:24 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:24 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:47:24 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:24.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:47:25 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:47:25 compute-1 sshd-session[98306]: Accepted publickey for zuul from 192.168.122.30 port 37878 ssh2: ECDSA SHA256:Ge4vEI3tJyOAEV3kv3WUGTzBV/uoL+dgWZHkPhbKL64
Dec 07 09:47:25 compute-1 systemd-logind[796]: New session 40 of user zuul.
Dec 07 09:47:25 compute-1 systemd[1]: Started Session 40 of User zuul.
Dec 07 09:47:25 compute-1 sshd-session[98306]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 07 09:47:25 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:25 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:47:25 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:25.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:47:26 compute-1 ceph-mon[80077]: pgmap v148: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:47:26 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:26 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:47:26 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:26.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:47:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/094726 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
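[editor's note] The Layer4 "Connection refused" check failure above is the expected symptom of the ganesha backend having died. One way to confirm backend state from the haproxy side is to query its runtime admin socket; both the container name (taken from the log prefix) and the socket path used here are assumptions, as is the presence of socat inside the image.

    # query haproxy's runtime server state through its admin socket (path assumed)
    podman exec ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua \
        sh -c 'echo "show servers state" | socat stdio /var/lib/haproxy/stats'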
Dec 07 09:47:26 compute-1 python3.9[98460]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 07 09:47:27 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:27 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:47:27 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:27.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:47:27 compute-1 python3.9[98614]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 07 09:47:28 compute-1 ceph-mon[80077]: pgmap v149: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:47:28 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:47:28 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:28 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:47:28 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:28.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:47:29 compute-1 ceph-mon[80077]: pgmap v150: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:47:29 compute-1 python3.9[98808]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:47:29 compute-1 sshd-session[98309]: Connection closed by 192.168.122.30 port 37878
Dec 07 09:47:29 compute-1 sshd-session[98306]: pam_unix(sshd:session): session closed for user zuul
Dec 07 09:47:29 compute-1 systemd-logind[796]: Session 40 logged out. Waiting for processes to exit.
Dec 07 09:47:29 compute-1 systemd[1]: session-40.scope: Deactivated successfully.
Dec 07 09:47:29 compute-1 systemd[1]: session-40.scope: Consumed 2.481s CPU time.
Dec 07 09:47:29 compute-1 systemd-logind[796]: Removed session 40.
Dec 07 09:47:29 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:29 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:47:29 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:29.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:47:30 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:30 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:47:30 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:30.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:47:30 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:47:31 compute-1 sshd-session[98835]: Connection closed by 104.248.193.130 port 43490
Dec 07 09:47:31 compute-1 ceph-mon[80077]: pgmap v151: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 07 09:47:31 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:31 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:47:31 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:31.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:47:32 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:32 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:47:32 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:32.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:47:32 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/094732 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 07 09:47:32 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Scheduled restart job, restart counter is at 1.
Dec 07 09:47:32 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.jddrlu for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec 07 09:47:32 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Consumed 2.057s CPU time.
Dec 07 09:47:32 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.jddrlu for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c...
Dec 07 09:47:33 compute-1 podman[98885]: 2025-12-07 09:47:33.300880474 +0000 UTC m=+0.070199788 container create b20fbabdf5d86f1705daf3a3804c9c9ec4924e08e9ec968d8a9e839e86081a83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 07 09:47:33 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93f4b3f80c333a59efc994b43badc3d571eb467c4b8f45d7807b8e8067db1d7a/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec 07 09:47:33 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93f4b3f80c333a59efc994b43badc3d571eb467c4b8f45d7807b8e8067db1d7a/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec 07 09:47:33 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93f4b3f80c333a59efc994b43badc3d571eb467c4b8f45d7807b8e8067db1d7a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 07 09:47:33 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93f4b3f80c333a59efc994b43badc3d571eb467c4b8f45d7807b8e8067db1d7a/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.jddrlu-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec 07 09:47:33 compute-1 podman[98885]: 2025-12-07 09:47:33.259432511 +0000 UTC m=+0.028751845 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 07 09:47:33 compute-1 podman[98885]: 2025-12-07 09:47:33.379679585 +0000 UTC m=+0.148998929 container init b20fbabdf5d86f1705daf3a3804c9c9ec4924e08e9ec968d8a9e839e86081a83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 07 09:47:33 compute-1 podman[98885]: 2025-12-07 09:47:33.385690599 +0000 UTC m=+0.155009913 container start b20fbabdf5d86f1705daf3a3804c9c9ec4924e08e9ec968d8a9e839e86081a83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 07 09:47:33 compute-1 bash[98885]: b20fbabdf5d86f1705daf3a3804c9c9ec4924e08e9ec968d8a9e839e86081a83
Dec 07 09:47:33 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.jddrlu for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec 07 09:47:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:33 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec 07 09:47:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:33 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec 07 09:47:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:33 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec 07 09:47:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:33 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec 07 09:47:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:33 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec 07 09:47:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:33 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec 07 09:47:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:33 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec 07 09:47:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:33 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
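[editor's note] The automatic recovery above ("Scheduled restart job, restart counter is at 1", followed by a fresh ganesha start and a 90-second NFS grace period) comes from the unit's Restart= policy. A quick way to see that policy and the accumulated restart count for this service, using the unit name exactly as it appears in the log:

    # show the restart policy and how many times systemd has restarted the unit
    systemctl show 'ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service' \
        -p Restart -p RestartSec -p NRestarts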
Dec 07 09:47:33 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:33 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:47:33 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:33.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:47:34 compute-1 ceph-mon[80077]: pgmap v152: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 07 09:47:34 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:34 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:47:34 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:34.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:47:35 compute-1 sshd-session[98943]: Accepted publickey for zuul from 192.168.122.30 port 34226 ssh2: ECDSA SHA256:Ge4vEI3tJyOAEV3kv3WUGTzBV/uoL+dgWZHkPhbKL64
Dec 07 09:47:35 compute-1 systemd-logind[796]: New session 41 of user zuul.
Dec 07 09:47:35 compute-1 systemd[1]: Started Session 41 of User zuul.
Dec 07 09:47:35 compute-1 sshd-session[98943]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 07 09:47:35 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:47:35 compute-1 ceph-mon[80077]: pgmap v153: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 07 09:47:35 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:35 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:47:35 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:35.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:47:36 compute-1 python3.9[99096]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 07 09:47:36 compute-1 sudo[99102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 09:47:36 compute-1 sudo[99102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:47:36 compute-1 sudo[99102]: pam_unix(sudo:session): session closed for user root
Dec 07 09:47:36 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:36 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:47:36 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:36.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:47:37 compute-1 python3.9[99276]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 07 09:47:37 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:37 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:47:37 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:37.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:47:38 compute-1 sudo[99430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmmwhyvarwsliitucpjemppgkssfoyfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100857.742237-81-148560185476662/AnsiballZ_setup.py'
Dec 07 09:47:38 compute-1 sudo[99430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:47:38 compute-1 python3.9[99433]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 07 09:47:38 compute-1 ceph-mon[80077]: pgmap v154: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec 07 09:47:38 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:38 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:47:38 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:38.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:47:38 compute-1 sudo[99430]: pam_unix(sudo:session): session closed for user root
Dec 07 09:47:38 compute-1 sudo[99515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhvlaocfeeimnvbyjtmyyvuxkgpgzyij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100857.742237-81-148560185476662/AnsiballZ_dnf.py'
Dec 07 09:47:38 compute-1 sudo[99515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:47:39 compute-1 python3.9[99517]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 07 09:47:39 compute-1 ceph-mon[80077]: pgmap v155: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 170 B/s wr, 0 op/s
Dec 07 09:47:39 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:39 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:47:39 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:39.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:47:40 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:40 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 07 09:47:40 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:40 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 07 09:47:40 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:40 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:47:40 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:40.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:47:40 compute-1 sudo[99515]: pam_unix(sudo:session): session closed for user root
Dec 07 09:47:40 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:47:41 compute-1 sudo[99669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyfvyxopjaxbjsjwtbqpadoznxkrkjqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100860.8269837-117-114431634324872/AnsiballZ_setup.py'
Dec 07 09:47:41 compute-1 sudo[99669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:47:41 compute-1 python3.9[99671]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 07 09:47:41 compute-1 ceph-mon[80077]: pgmap v156: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 426 B/s wr, 1 op/s
Dec 07 09:47:41 compute-1 sudo[99669]: pam_unix(sudo:session): session closed for user root
Dec 07 09:47:41 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:41 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:47:41 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:41.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:47:42 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:42 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:47:42 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:42.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:47:42 compute-1 sudo[99865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcdytncexrxrddolkhwsougxstivlbdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100862.1199067-150-10817648743783/AnsiballZ_file.py'
Dec 07 09:47:42 compute-1 sudo[99865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:47:42 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:47:42 compute-1 python3.9[99867]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:47:42 compute-1 sudo[99865]: pam_unix(sudo:session): session closed for user root
Dec 07 09:47:43 compute-1 sudo[100017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzoscmgflbatfbynmmqbjlpmutarpcnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100862.9551914-174-255111814924566/AnsiballZ_command.py'
Dec 07 09:47:43 compute-1 sudo[100017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:47:43 compute-1 python3.9[100019]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:47:43 compute-1 sudo[100017]: pam_unix(sudo:session): session closed for user root
Dec 07 09:47:43 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:43 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:47:43 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:43.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:47:44 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:44 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:47:44 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:44.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:47:44 compute-1 sudo[100183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzrmkujxmtlhiebmwupvdbybngdibtnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100864.0606148-198-176248149911049/AnsiballZ_stat.py'
Dec 07 09:47:44 compute-1 sudo[100183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:47:44 compute-1 ceph-mon[80077]: pgmap v157: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 426 B/s wr, 1 op/s
Dec 07 09:47:44 compute-1 python3.9[100185]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:47:44 compute-1 sudo[100183]: pam_unix(sudo:session): session closed for user root
Dec 07 09:47:45 compute-1 sudo[100261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-regbtztwiyrqrosvoohmapwrfivbjgdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100864.0606148-198-176248149911049/AnsiballZ_file.py'
Dec 07 09:47:45 compute-1 sudo[100261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:47:45 compute-1 python3.9[100263]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:47:45 compute-1 sudo[100261]: pam_unix(sudo:session): session closed for user root
Dec 07 09:47:45 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:47:45 compute-1 ceph-mon[80077]: pgmap v158: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Dec 07 09:47:45 compute-1 sudo[100413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgcayjohrjbcwtzwdjmzocrxjpuvzrfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100865.5619245-234-280372572529614/AnsiballZ_stat.py'
Dec 07 09:47:45 compute-1 sudo[100413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:47:45 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:45 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:47:45 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:45.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:47:46 compute-1 python3.9[100415]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:47:46 compute-1 sudo[100413]: pam_unix(sudo:session): session closed for user root
Dec 07 09:47:46 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:46 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:47:46 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:46.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:47:46 compute-1 sudo[100492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isflcmsncdeicbdapqsyepjvthmnjafc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100865.5619245-234-280372572529614/AnsiballZ_file.py'
Dec 07 09:47:46 compute-1 sudo[100492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:47:46 compute-1 python3.9[100494]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:47:46 compute-1 sudo[100492]: pam_unix(sudo:session): session closed for user root
Dec 07 09:47:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:46 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 07 09:47:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:46 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec 07 09:47:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:46 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec 07 09:47:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:46 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec 07 09:47:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:46 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec 07 09:47:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:46 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec 07 09:47:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:46 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec 07 09:47:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:46 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 09:47:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:46 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 09:47:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:46 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 09:47:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:46 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec 07 09:47:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:46 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 09:47:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:46 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec 07 09:47:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:46 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec 07 09:47:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:46 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec 07 09:47:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:46 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec 07 09:47:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:46 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec 07 09:47:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:46 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec 07 09:47:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:46 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec 07 09:47:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:46 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec 07 09:47:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:46 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec 07 09:47:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:46 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec 07 09:47:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:46 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec 07 09:47:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:46 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec 07 09:47:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:46 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 07 09:47:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:46 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec 07 09:47:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:46 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 07 09:47:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:46 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 07 09:47:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:46 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8c04000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:47:47 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:47 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bec0016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:47:47 compute-1 sudo[100659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqitjalpswcgzihjvueegkqkpjgesqop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100866.9993598-273-50601434772476/AnsiballZ_ini_file.py'
Dec 07 09:47:47 compute-1 sudo[100659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:47:47 compute-1 python3.9[100661]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:47:47 compute-1 sudo[100659]: pam_unix(sudo:session): session closed for user root
Dec 07 09:47:47 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:47 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:47:47 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:47.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:47:48 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:48 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bf4001230 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:47:48 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/094748 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 07 09:47:49 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:49 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8be0000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:47:49 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:49 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:47:49 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:49.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:47:49 compute-1 sudo[100812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uartqegsnawftqdpcvnnxnamtzgcebez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100867.8807728-273-11335768135963/AnsiballZ_ini_file.py'
Dec 07 09:47:49 compute-1 sudo[100812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:47:49 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:49 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8c00001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:47:49 compute-1 python3.9[100814]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:47:49 compute-1 sudo[100812]: pam_unix(sudo:session): session closed for user root
Dec 07 09:47:49 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:49 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:47:49 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:49.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:47:50 compute-1 sudo[100964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlrxssqrvsysxshagfjtvkamxjwavcrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100869.6300323-273-253582252326562/AnsiballZ_ini_file.py'
Dec 07 09:47:50 compute-1 sudo[100964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:47:50 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:50 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bec0016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:47:50 compute-1 ceph-mon[80077]: pgmap v159: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 07 09:47:50 compute-1 python3.9[100966]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:47:50 compute-1 sudo[100964]: pam_unix(sudo:session): session closed for user root
Dec 07 09:47:50 compute-1 sudo[101117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxahbpwxyrxkocplrufajnrsexnuoenf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100870.4209216-273-10256427272547/AnsiballZ_ini_file.py'
Dec 07 09:47:50 compute-1 sudo[101117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:47:50 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:47:50 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:50 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bf4001d50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:47:50 compute-1 python3.9[101119]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:47:51 compute-1 sudo[101117]: pam_unix(sudo:session): session closed for user root
Dec 07 09:47:51 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:51 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 07 09:47:51 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:51 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 07 09:47:51 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:51 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:47:51 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:51.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:47:51 compute-1 ceph-mon[80077]: pgmap v160: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 07 09:47:51 compute-1 ceph-mon[80077]: pgmap v161: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 1.4 KiB/s wr, 4 op/s
Dec 07 09:47:51 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:51 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8be00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:47:51 compute-1 sudo[101219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 09:47:51 compute-1 sudo[101219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:47:51 compute-1 sudo[101219]: pam_unix(sudo:session): session closed for user root
Dec 07 09:47:51 compute-1 sudo[101244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 07 09:47:51 compute-1 sudo[101244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:47:51 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:51 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:47:51 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:51.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:47:52 compute-1 sudo[101331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmmhofamfjeiobihtrmfcbgmqwtxrtdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100871.3646638-366-257118113537543/AnsiballZ_dnf.py'
Dec 07 09:47:52 compute-1 sudo[101331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:47:52 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:52 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8c000023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:47:52 compute-1 sudo[101244]: pam_unix(sudo:session): session closed for user root
Dec 07 09:47:52 compute-1 python3.9[101335]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 07 09:47:52 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:52 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8c000023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:47:53 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:53 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:47:53 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:53.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:47:53 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:53 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bf4001d50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:47:53 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:53 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:47:53 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:53.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:47:54 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:54 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8be00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:47:54 compute-1 sudo[101331]: pam_unix(sudo:session): session closed for user root
Dec 07 09:47:54 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:54 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 07 09:47:54 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:47:54 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 07 09:47:54 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:54 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8c000023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:47:55 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:55 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:47:55 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:55.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:47:55 compute-1 sudo[101506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhjvywzombxbyromgzqtmgaatuzlqjze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100874.9234512-399-95973582497380/AnsiballZ_setup.py'
Dec 07 09:47:55 compute-1 sudo[101506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:47:55 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:55 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bec0023f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:47:55 compute-1 python3.9[101508]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 07 09:47:55 compute-1 sudo[101506]: pam_unix(sudo:session): session closed for user root
Dec 07 09:47:55 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:47:55 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:55 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:47:55 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:55.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:47:56 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:56 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8be00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:47:56 compute-1 sudo[101661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbtuxzozrgqtvxxiijgpuvcifilzasib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100875.8962698-423-69137552732828/AnsiballZ_stat.py'
Dec 07 09:47:56 compute-1 sudo[101661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:47:56 compute-1 sudo[101664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 09:47:56 compute-1 sudo[101664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:47:56 compute-1 sudo[101664]: pam_unix(sudo:session): session closed for user root
Dec 07 09:47:56 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:56 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bf4002df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:47:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/094757 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 07 09:47:57 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:57 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:47:57 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:57.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:47:57 compute-1 python3.9[101663]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 07 09:47:57 compute-1 sudo[101661]: pam_unix(sudo:session): session closed for user root
Dec 07 09:47:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:57 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8c000023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:47:57 compute-1 sudo[101838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpwshcpinvsmrorscuaocfnrarwtudvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100877.6049194-450-259390862595047/AnsiballZ_stat.py'
Dec 07 09:47:57 compute-1 sudo[101838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:47:57 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:57 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:47:57 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:57.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:47:58 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:58 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bec0023f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:47:58 compute-1 python3.9[101840]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 07 09:47:58 compute-1 sudo[101838]: pam_unix(sudo:session): session closed for user root
Dec 07 09:47:58 compute-1 sudo[101991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-weixhwvbhcffdpskszhktiwwznxmhchq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100878.530204-480-135200328002331/AnsiballZ_command.py'
Dec 07 09:47:58 compute-1 sudo[101991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:47:58 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:58 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bec0023f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:47:59 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:59 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:47:59 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:47:59.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:47:59 compute-1 python3.9[101993]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:47:59 compute-1 sudo[101991]: pam_unix(sudo:session): session closed for user root
Dec 07 09:47:59 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:47:59 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bf4002df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:47:59 compute-1 ceph-mon[80077]: pgmap v162: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1.2 KiB/s wr, 3 op/s
Dec 07 09:47:59 compute-1 ceph-mon[80077]: pgmap v163: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1.2 KiB/s wr, 3 op/s
Dec 07 09:47:59 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:47:59 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:47:59 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 07 09:47:59 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 07 09:47:59 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:47:59 compute-1 sudo[102144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbqwrjwapblsgjhjiknpwfjhuuiywdhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100879.4330616-510-204415521275175/AnsiballZ_service_facts.py'
Dec 07 09:47:59 compute-1 sudo[102144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:47:59 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:47:59 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:47:59 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:47:59.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:48:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:00 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8c000038d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:00 compute-1 python3.9[102146]: ansible-service_facts Invoked
Dec 07 09:48:00 compute-1 network[102164]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 07 09:48:00 compute-1 network[102165]: 'network-scripts' will be removed from distribution in near future.
Dec 07 09:48:00 compute-1 network[102166]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 07 09:48:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:00 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8be0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:00 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:48:01 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:48:01 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:48:01 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:01.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:48:01 compute-1 ceph-mon[80077]: pgmap v164: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Dec 07 09:48:01 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:48:01 compute-1 ceph-mon[80077]: pgmap v165: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s
Dec 07 09:48:01 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:01 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bec0023f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:01 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:48:01 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:48:01 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:48:01.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:48:02 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:02 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bf4002df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:02 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:02 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8c000038d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:03 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:48:03 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:48:03 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:03.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:48:03 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:03 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8be0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:03 compute-1 sudo[102144]: pam_unix(sudo:session): session closed for user root
Dec 07 09:48:03 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:48:03 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:48:03 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:48:03.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:48:04 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:04 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bec0023f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:04 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:04 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bf4003ef0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:05 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:48:05 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:48:05 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:05.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:48:05 compute-1 sudo[102451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyuxvkmzrfcqsqdenxerwetdyoszvtdx ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1765100884.7310991-555-101407609444127/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1765100884.7310991-555-101407609444127/args'
Dec 07 09:48:05 compute-1 sudo[102451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:48:05 compute-1 sudo[102451]: pam_unix(sudo:session): session closed for user root
Dec 07 09:48:05 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:05 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8c000038d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:05 compute-1 ceph-mon[80077]: pgmap v166: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 682 B/s wr, 2 op/s
Dec 07 09:48:05 compute-1 sudo[102618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aohixhpdseofabicnlfptfldpcvzaohk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100885.579982-588-272883580591517/AnsiballZ_dnf.py'
Dec 07 09:48:05 compute-1 sudo[102618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:48:05 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:48:05 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:48:05 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:48:05.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:48:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:06 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8be0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:06 compute-1 python3.9[102620]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 07 09:48:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:06 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bec0023f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:07 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:48:07 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:48:07 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:07.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:48:07 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:48:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:07 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bf4003ef0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:07 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:48:07 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:48:07 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:48:07.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:48:08 compute-1 sudo[102618]: pam_unix(sudo:session): session closed for user root
Dec 07 09:48:08 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:08 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8c000038d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:08 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:08 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8be0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:09 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:48:09 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:48:09 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:09.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:48:09 compute-1 ceph-mon[80077]: pgmap v167: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 170 B/s wr, 0 op/s
Dec 07 09:48:09 compute-1 ceph-mon[80077]: pgmap v168: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 170 B/s wr, 0 op/s
Dec 07 09:48:09 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:09 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bec0023f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:09 compute-1 sudo[102773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcgrtpolzocwqezecsavmiziiuspakqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100888.914185-627-155278687049575/AnsiballZ_package_facts.py'
Dec 07 09:48:09 compute-1 sudo[102773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:48:09 compute-1 python3.9[102775]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec 07 09:48:09 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:48:09 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:48:09 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:48:09.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:48:10 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:10 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bf4003ef0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:10 compute-1 sudo[102773]: pam_unix(sudo:session): session closed for user root
Dec 07 09:48:10 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:10 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8c000038d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:11 compute-1 ceph-mon[80077]: pgmap v169: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 170 B/s wr, 0 op/s
Dec 07 09:48:11 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:48:11 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:48:11 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:11.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:48:11 compute-1 sudo[102926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxjddfiybyrkjxempeawjgpoopdarvuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100890.8738005-658-44067485445470/AnsiballZ_stat.py'
Dec 07 09:48:11 compute-1 sudo[102926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:48:11 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:11 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8be0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:11 compute-1 python3.9[102928]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:48:11 compute-1 sudo[102926]: pam_unix(sudo:session): session closed for user root
Dec 07 09:48:11 compute-1 sudo[103004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trqbkklvlfllywchqlkzzelncmbbdpzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100890.8738005-658-44067485445470/AnsiballZ_file.py'
Dec 07 09:48:11 compute-1 sudo[103004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:48:11 compute-1 python3.9[103006]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:48:11 compute-1 sudo[103004]: pam_unix(sudo:session): session closed for user root
Dec 07 09:48:12 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:48:12 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:48:12 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:48:12.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:48:12 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:12 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bec0023f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:12 compute-1 ceph-mon[80077]: pgmap v170: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Dec 07 09:48:12 compute-1 ceph-mon[80077]: pgmap v171: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec 07 09:48:12 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:48:12 compute-1 sudo[103157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hystvgjjgjktavgngdjnhlkojhbxnoel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100892.1659472-693-6910350900273/AnsiballZ_stat.py'
Dec 07 09:48:12 compute-1 sudo[103157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:48:12 compute-1 python3.9[103159]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:48:12 compute-1 sudo[103157]: pam_unix(sudo:session): session closed for user root
Dec 07 09:48:12 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:12 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bf4003ef0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:13 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:48:13 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:48:13 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:13.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:48:13 compute-1 sudo[103235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqqwftlgqzuyoorjtacxyxepvqsesejl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100892.1659472-693-6910350900273/AnsiballZ_file.py'
Dec 07 09:48:13 compute-1 sudo[103235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:48:13 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:48:13 compute-1 python3.9[103237]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:48:13 compute-1 sudo[103235]: pam_unix(sudo:session): session closed for user root
Dec 07 09:48:13 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:13 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8c000038d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:14 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:48:14 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:48:14 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:48:14.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:48:14 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:14 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8be0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:14 compute-1 ceph-mon[80077]: pgmap v172: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:48:14 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:14 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bec0023f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:15 compute-1 sudo[103388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpfrulwljcqkgolweghpokcvsxacaqrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100894.6003957-748-32319286909821/AnsiballZ_lineinfile.py'
Dec 07 09:48:15 compute-1 sudo[103388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:48:15 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:48:15 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.002000054s ======
Dec 07 09:48:15 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:15.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Dec 07 09:48:15 compute-1 sudo[103391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 07 09:48:15 compute-1 sudo[103391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:48:15 compute-1 sudo[103391]: pam_unix(sudo:session): session closed for user root
Dec 07 09:48:15 compute-1 python3.9[103390]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:48:15 compute-1 sudo[103388]: pam_unix(sudo:session): session closed for user root
Dec 07 09:48:15 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:15 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bf4003ef0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:15 compute-1 systemd[83131]: Created slice User Background Tasks Slice.
Dec 07 09:48:15 compute-1 systemd[83131]: Starting Cleanup of User's Temporary Files and Directories...
Dec 07 09:48:15 compute-1 systemd[83131]: Finished Cleanup of User's Temporary Files and Directories.
Dec 07 09:48:16 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:48:16 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:48:16 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:48:16.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:48:16 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:16 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8c000045e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:16 compute-1 sudo[103517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 09:48:16 compute-1 sudo[103517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:48:16 compute-1 sudo[103517]: pam_unix(sudo:session): session closed for user root
Dec 07 09:48:16 compute-1 sudo[103592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jeuleyrpzxhakcbkauokkccdrwfnbrux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100896.3042097-793-272281046448777/AnsiballZ_setup.py'
Dec 07 09:48:16 compute-1 sudo[103592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:48:16 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:16 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8be0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:16 compute-1 python3.9[103594]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 07 09:48:17 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:48:17 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:48:17 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:17.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:48:17 compute-1 sudo[103592]: pam_unix(sudo:session): session closed for user root
Dec 07 09:48:17 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:48:17 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:17 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bec0023f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:17 compute-1 sudo[103676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hogfotkegubaffhtyzoiufjuvoezqieq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100896.3042097-793-272281046448777/AnsiballZ_systemd.py'
Dec 07 09:48:17 compute-1 sudo[103676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:48:17 compute-1 ceph-mon[80077]: pgmap v173: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:48:17 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:48:17 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:48:18 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:48:18 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:48:18 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:48:18.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:48:18 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:18 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bf4003ef0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:18 compute-1 python3.9[103678]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 07 09:48:18 compute-1 sudo[103676]: pam_unix(sudo:session): session closed for user root
Dec 07 09:48:18 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:18 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8c000045e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:19 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:48:19 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:48:19 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:19.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:48:19 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:19 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bf4003ef0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:19 compute-1 sshd-session[98946]: Connection closed by 192.168.122.30 port 34226
Dec 07 09:48:19 compute-1 sshd-session[98943]: pam_unix(sshd:session): session closed for user zuul
Dec 07 09:48:19 compute-1 systemd[1]: session-41.scope: Deactivated successfully.
Dec 07 09:48:19 compute-1 systemd[1]: session-41.scope: Consumed 26.858s CPU time.
Dec 07 09:48:19 compute-1 systemd-logind[796]: Session 41 logged out. Waiting for processes to exit.
Dec 07 09:48:19 compute-1 systemd-logind[796]: Removed session 41.
Dec 07 09:48:19 compute-1 ceph-mon[80077]: pgmap v174: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 07 09:48:20 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:48:20 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:48:20 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:48:20.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:48:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:20 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bd8000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:20 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bd0000d90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:21 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:48:21 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:48:21 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:21.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:48:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:21 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8c000045e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:22 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:48:22 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:48:22 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:48:22.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:48:22 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:22 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8c000045e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:22 compute-1 ceph-mon[80077]: pgmap v175: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:48:22 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:48:22 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:22 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bd80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:23 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:48:23 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:48:23 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:23.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:48:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:23 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bd00018b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:24 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:48:24 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:48:24 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:48:24.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:48:24 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:24 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bf4003ef0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:24 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:24 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8c000045e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:25 compute-1 ceph-mon[80077]: pgmap v176: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 07 09:48:25 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:48:25 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:48:25 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:25.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:48:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:25 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8c000045e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:25 compute-1 sshd-session[103712]: Accepted publickey for zuul from 192.168.122.30 port 35518 ssh2: ECDSA SHA256:Ge4vEI3tJyOAEV3kv3WUGTzBV/uoL+dgWZHkPhbKL64
Dec 07 09:48:25 compute-1 systemd-logind[796]: New session 42 of user zuul.
Dec 07 09:48:26 compute-1 systemd[1]: Started Session 42 of User zuul.
Dec 07 09:48:26 compute-1 sshd-session[103712]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 07 09:48:26 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:48:26 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:48:26 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:48:26.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:48:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:26 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bd00018b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:26 compute-1 ceph-mon[80077]: pgmap v177: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:48:26 compute-1 ceph-mon[80077]: pgmap v178: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:48:26 compute-1 sudo[103866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkpiczesxaragzxvjtzdbcjxtvhekuvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100906.1312144-27-21725164135499/AnsiballZ_file.py'
Dec 07 09:48:26 compute-1 sudo[103866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:48:26 compute-1 python3.9[103868]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:48:26 compute-1 sudo[103866]: pam_unix(sudo:session): session closed for user root
Dec 07 09:48:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:26 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bf4003ef0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:27 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:48:27 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:48:27 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:27.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:48:27 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:48:27 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:27 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bd8001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:27 compute-1 sudo[104018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnqiiaclqwgfdtaviokfqshhtlxkwiaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100907.1395738-63-273982174276213/AnsiballZ_stat.py'
Dec 07 09:48:27 compute-1 sudo[104018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:48:27 compute-1 python3.9[104020]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:48:27 compute-1 sudo[104018]: pam_unix(sudo:session): session closed for user root
Dec 07 09:48:28 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:48:28 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:48:28 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:48:28.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:48:28 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:28 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8c000045e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:28 compute-1 sudo[104097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzouyizimyvpljewuhfseyyuoivfhlvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100907.1395738-63-273982174276213/AnsiballZ_file.py'
Dec 07 09:48:28 compute-1 sudo[104097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:48:28 compute-1 python3.9[104099]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:48:28 compute-1 sudo[104097]: pam_unix(sudo:session): session closed for user root
Dec 07 09:48:28 compute-1 sshd-session[103715]: Connection closed by 192.168.122.30 port 35518
Dec 07 09:48:28 compute-1 sshd-session[103712]: pam_unix(sshd:session): session closed for user zuul
Dec 07 09:48:28 compute-1 systemd-logind[796]: Session 42 logged out. Waiting for processes to exit.
Dec 07 09:48:28 compute-1 systemd[1]: session-42.scope: Deactivated successfully.
Dec 07 09:48:28 compute-1 systemd[1]: session-42.scope: Consumed 1.840s CPU time.
Dec 07 09:48:28 compute-1 systemd-logind[796]: Removed session 42.
Dec 07 09:48:28 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:28 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bd00018b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:29 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:48:29 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:48:29 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:29.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:48:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:29 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bf4003ef0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:30 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:48:30 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:48:30 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:48:30.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:48:30 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:30 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bd8001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:30 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:30 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8c000045e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:31 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:48:31 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:48:31 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:31.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:48:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:31 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bd0002d40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:32 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:48:32 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:48:32 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:48:32.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:48:32 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:32 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bf4003ef0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:32 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:48:32 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:32 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bd8001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:33 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:48:33 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:48:33 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:33.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:48:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:33 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8c000045e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:34 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:48:34 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:48:34 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:48:34.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:48:34 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:34 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bd0002d40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:34 compute-1 sshd-session[104127]: Accepted publickey for zuul from 192.168.122.30 port 35524 ssh2: ECDSA SHA256:Ge4vEI3tJyOAEV3kv3WUGTzBV/uoL+dgWZHkPhbKL64
Dec 07 09:48:34 compute-1 systemd-logind[796]: New session 43 of user zuul.
Dec 07 09:48:34 compute-1 ceph-mds[85822]: mds.beacon.cephfs.compute-1.ihigcc missed beacon ack from the monitors
Dec 07 09:48:34 compute-1 systemd[1]: Started Session 43 of User zuul.
Dec 07 09:48:34 compute-1 sshd-session[104127]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 07 09:48:34 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:34 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bf4003ef0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:35 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:48:35 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:48:35 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:35.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:48:35 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:35 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bd80032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:35 compute-1 python3.9[104280]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 07 09:48:36 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:48:36 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:48:36 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:48:36.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:48:36 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:36 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8c000045e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:36 compute-1 sudo[104311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 09:48:36 compute-1 sudo[104311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:48:36 compute-1 sudo[104311]: pam_unix(sudo:session): session closed for user root
Dec 07 09:48:36 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:36 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8c000045e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:37 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:48:37 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:48:37 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:37.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:48:37 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:48:37 compute-1 sudo[104461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjllqkgxbuagfpdfvusvaszmwftiyuzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100916.8906806-60-213074356709676/AnsiballZ_file.py'
Dec 07 09:48:37 compute-1 sudo[104461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:48:37 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:37 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8c000045e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:37 compute-1 python3.9[104463]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:48:37 compute-1 sudo[104461]: pam_unix(sudo:session): session closed for user root
Dec 07 09:48:38 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:48:38 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:48:38 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:48:38.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:48:38 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:38 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bd80032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:38 compute-1 sudo[104637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpacxfyshsddcelniersvkfpejbsdfri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100917.8198407-84-238032076753197/AnsiballZ_stat.py'
Dec 07 09:48:38 compute-1 sudo[104637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:48:38 compute-1 ceph-mon[80077]: pgmap v179: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 07 09:48:38 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:48:38 compute-1 python3.9[104639]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:48:38 compute-1 sudo[104637]: pam_unix(sudo:session): session closed for user root
Dec 07 09:48:38 compute-1 sudo[104715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkdxmyujtyrpjqenlunjrwdtmzxbdxtf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100917.8198407-84-238032076753197/AnsiballZ_file.py'
Dec 07 09:48:38 compute-1 sudo[104715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:48:38 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:38 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bd80032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:39 compute-1 python3.9[104717]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.c87glq4n recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:48:39 compute-1 sudo[104715]: pam_unix(sudo:session): session closed for user root
Dec 07 09:48:39 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:48:39 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:48:39 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:39.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:48:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:39 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bd80032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:39 compute-1 sudo[104867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffytnvnvkzwjncraatuxpkoxpbwdtidc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100919.6187384-144-110240009665262/AnsiballZ_stat.py'
Dec 07 09:48:39 compute-1 sudo[104867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:48:40 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:48:40 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:48:40 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:48:40.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:48:40 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:40 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bf4003ef0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:40 compute-1 python3.9[104869]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:48:40 compute-1 sudo[104867]: pam_unix(sudo:session): session closed for user root
Dec 07 09:48:40 compute-1 sudo[104946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fipsgaroaksgfevylxebakzsvefsmzxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100919.6187384-144-110240009665262/AnsiballZ_file.py'
Dec 07 09:48:40 compute-1 sudo[104946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:48:40 compute-1 python3.9[104948]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.uqu1wscq recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:48:40 compute-1 sudo[104946]: pam_unix(sudo:session): session closed for user root
Dec 07 09:48:40 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:40 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bd0003a50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:41 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:48:41 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:48:41 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:41.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:48:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:41 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8c000045e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:41 compute-1 sudo[105098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndrvlltwzjfdzzdmwajjcqvomnahemll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100921.0905423-183-270388658817942/AnsiballZ_file.py'
Dec 07 09:48:41 compute-1 sudo[105098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:48:41 compute-1 python3.9[105100]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:48:41 compute-1 sudo[105098]: pam_unix(sudo:session): session closed for user root
Dec 07 09:48:42 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:48:42 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:48:42 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:48:42.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:48:42 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:42 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8c000045e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:42 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:48:42 compute-1 sudo[105251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-poagfrdosxvaazhkteejugemkqigexnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100922.4300199-207-206615065837414/AnsiballZ_stat.py'
Dec 07 09:48:42 compute-1 sudo[105251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:48:42 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:42 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bdc000bf0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:43 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:48:43 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:48:43 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:43.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:48:43 compute-1 python3.9[105253]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:48:43 compute-1 sudo[105251]: pam_unix(sudo:session): session closed for user root
Dec 07 09:48:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:43 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bd80032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:43 compute-1 sudo[105329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhgyptpjadeophbufyvpqgbhqedwbzsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100922.4300199-207-206615065837414/AnsiballZ_file.py'
Dec 07 09:48:43 compute-1 sudo[105329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:48:43 compute-1 python3.9[105331]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:48:43 compute-1 sudo[105329]: pam_unix(sudo:session): session closed for user root
Dec 07 09:48:44 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:48:44 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:48:44 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:48:44.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:48:44 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:44 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bf4004810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:44 compute-1 sudo[105482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqfvqqkfsdvrjpajczojwvskouuaixym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100924.011585-207-125783636186981/AnsiballZ_stat.py'
Dec 07 09:48:44 compute-1 sudo[105482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:48:44 compute-1 python3.9[105484]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:48:44 compute-1 sudo[105482]: pam_unix(sudo:session): session closed for user root
Dec 07 09:48:44 compute-1 sudo[105560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eihynettbikjgrcqjxmmmqhloacscnmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100924.011585-207-125783636186981/AnsiballZ_file.py'
Dec 07 09:48:44 compute-1 sudo[105560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:48:44 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:44 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8c000045e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:45 compute-1 ceph-mon[80077]: pgmap v180: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:48:45 compute-1 ceph-mon[80077]: pgmap v181: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 07 09:48:45 compute-1 ceph-mon[80077]: pgmap v182: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:48:45 compute-1 ceph-mon[80077]: pgmap v183: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:48:45 compute-1 ceph-mon[80077]: pgmap v184: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 07 09:48:45 compute-1 ceph-mon[80077]: pgmap v185: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:48:45 compute-1 python3.9[105562]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:48:45 compute-1 sudo[105560]: pam_unix(sudo:session): session closed for user root
Dec 07 09:48:45 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:48:45 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:48:45 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:45.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:48:45 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:45 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bdc001710 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:45 compute-1 sudo[105712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdsuipbtffevronfrkglneqkquscdvhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100925.406468-276-163319893471191/AnsiballZ_file.py'
Dec 07 09:48:45 compute-1 sudo[105712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:48:46 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:48:46 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:48:46 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:48:46.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:48:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:46 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bd8004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:46 compute-1 python3.9[105714]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:48:46 compute-1 sudo[105712]: pam_unix(sudo:session): session closed for user root
Dec 07 09:48:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:46 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bf4004810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:46 compute-1 sudo[105865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epfmuaotjyqpjuklaylbafdbjoojuswu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100926.657229-300-153862787114579/AnsiballZ_stat.py'
Dec 07 09:48:46 compute-1 sudo[105865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:48:47 compute-1 python3.9[105867]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:48:47 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:48:47 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:48:47 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:47.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:48:47 compute-1 sudo[105865]: pam_unix(sudo:session): session closed for user root
Dec 07 09:48:47 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:48:47 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:47 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8c000045e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:47 compute-1 sudo[105943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxxsirkhrscbhmuatozbamfmpnwwleur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100926.657229-300-153862787114579/AnsiballZ_file.py'
Dec 07 09:48:47 compute-1 sudo[105943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:48:47 compute-1 python3.9[105945]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:48:47 compute-1 sudo[105943]: pam_unix(sudo:session): session closed for user root
Dec 07 09:48:48 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:48:48 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:48:48 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:48:48.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:48:48 compute-1 ceph-mon[80077]: pgmap v186: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 07 09:48:48 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:48:48 compute-1 ceph-mon[80077]: pgmap v187: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:48:48 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:48 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bdc001710 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:48 compute-1 sudo[106096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crobndkyvoubuxzxioctwaiqyzzhvwgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100928.0584023-336-66354038223592/AnsiballZ_stat.py'
Dec 07 09:48:48 compute-1 sudo[106096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:48:48 compute-1 python3.9[106098]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:48:48 compute-1 sudo[106096]: pam_unix(sudo:session): session closed for user root
Dec 07 09:48:48 compute-1 sudo[106175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glptafzbcwvnbgroirhdzyuocuyqvkno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100928.0584023-336-66354038223592/AnsiballZ_file.py'
Dec 07 09:48:48 compute-1 sudo[106175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:48:48 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:48 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bd8004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:49 compute-1 python3.9[106177]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:48:49 compute-1 sudo[106175]: pam_unix(sudo:session): session closed for user root
Dec 07 09:48:49 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:48:49 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:48:49 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:49.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:48:49 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:49 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bd8004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:49 compute-1 sudo[106329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjmrabhtzdlcdqhndtrdiduuisirtdxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100929.3552897-372-218147053036398/AnsiballZ_systemd.py'
Dec 07 09:48:49 compute-1 sudo[106329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:48:50 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:48:50 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:48:50 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:48:50.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:48:50 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:50 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8c000045e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:50 compute-1 ceph-mon[80077]: pgmap v188: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 07 09:48:50 compute-1 ceph-mon[80077]: pgmap v189: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 07 09:48:50 compute-1 python3.9[106331]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 07 09:48:50 compute-1 systemd[1]: Reloading.
Dec 07 09:48:50 compute-1 systemd-sysv-generator[106361]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:48:50 compute-1 systemd-rc-local-generator[106354]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:48:50 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:50 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bec000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:51 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:48:51 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:48:51 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:51.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:48:51 compute-1 sudo[106329]: pam_unix(sudo:session): session closed for user root
Dec 07 09:48:51 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:51 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8be0001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:52 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:48:52 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:48:52 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:48:52.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:48:52 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:52 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bf4004810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:52 compute-1 sudo[106520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcaakzdxrlasmezfokxfoobymclzzvfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100931.8733983-396-271556302165141/AnsiballZ_stat.py'
Dec 07 09:48:52 compute-1 sudo[106520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:48:52 compute-1 python3.9[106522]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:48:52 compute-1 sudo[106520]: pam_unix(sudo:session): session closed for user root
Dec 07 09:48:52 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:48:52 compute-1 ceph-mon[80077]: pgmap v190: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:48:52 compute-1 sudo[106598]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbdoqvftpncsbyioxwfcuhxbwbwqldug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100931.8733983-396-271556302165141/AnsiballZ_file.py'
Dec 07 09:48:52 compute-1 sudo[106598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:48:52 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:52 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8c000045e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:53 compute-1 python3.9[106600]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:48:53 compute-1 sudo[106598]: pam_unix(sudo:session): session closed for user root
Dec 07 09:48:53 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:48:53 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:48:53 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:53.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:48:53 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:53 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bec000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:53 compute-1 ceph-mon[80077]: pgmap v191: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 07 09:48:53 compute-1 ceph-mon[80077]: pgmap v192: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:48:54 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:48:54 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:48:54 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:48:54.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:48:54 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:54 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8be0001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:54 compute-1 sudo[106751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwfbnpwoerpiiffmfyblzphxukbjpnag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100933.9498734-432-205720705933582/AnsiballZ_stat.py'
Dec 07 09:48:54 compute-1 sudo[106751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:48:54 compute-1 python3.9[106753]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:48:54 compute-1 sudo[106751]: pam_unix(sudo:session): session closed for user root
Dec 07 09:48:54 compute-1 sudo[106829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zepuqmalziwivzxgyaqdlgdyevtdpvjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100933.9498734-432-205720705933582/AnsiballZ_file.py'
Dec 07 09:48:54 compute-1 sudo[106829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:48:54 compute-1 python3.9[106831]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:48:54 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:54 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bf4004810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:54 compute-1 sudo[106829]: pam_unix(sudo:session): session closed for user root
Dec 07 09:48:55 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:48:55 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:48:55 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:55.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:48:55 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:55 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8c000045e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:55 compute-1 sudo[106981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aicvfrzsidkzhlictswvnyvwrywhfeor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100935.3064697-468-238137745561136/AnsiballZ_systemd.py'
Dec 07 09:48:55 compute-1 sudo[106981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:48:55 compute-1 python3.9[106983]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 07 09:48:55 compute-1 systemd[1]: Reloading.
Dec 07 09:48:56 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:48:56 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:48:56 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:48:56.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:48:56 compute-1 systemd-rc-local-generator[107014]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:48:56 compute-1 systemd-sysv-generator[107017]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:48:56 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:56 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bec002240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:56 compute-1 ceph-mon[80077]: pgmap v193: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 07 09:48:56 compute-1 systemd[1]: Starting Create netns directory...
Dec 07 09:48:56 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 07 09:48:56 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 07 09:48:56 compute-1 systemd[1]: Finished Create netns directory.
Dec 07 09:48:56 compute-1 sudo[106981]: pam_unix(sudo:session): session closed for user root
Dec 07 09:48:56 compute-1 sudo[107052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 09:48:56 compute-1 sudo[107052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:48:56 compute-1 sudo[107052]: pam_unix(sudo:session): session closed for user root
Dec 07 09:48:56 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:56 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8be0001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:57 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:48:57 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:48:57 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:57.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:48:57 compute-1 python3.9[107202]: ansible-ansible.builtin.service_facts Invoked
Dec 07 09:48:57 compute-1 network[107219]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 07 09:48:57 compute-1 network[107220]: 'network-scripts' will be removed from distribution in near future.
Dec 07 09:48:57 compute-1 network[107221]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 07 09:48:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:57 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bf4004810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:58 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:48:58 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:48:58 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:48:58.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:48:58 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:58 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8c000045e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:58 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:48:58 compute-1 ceph-mon[80077]: pgmap v194: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:48:58 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:58 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8c000045e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:48:59 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:48:59 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:48:59 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:48:59.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:48:59 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:48:59 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8be0001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:00 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:00 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:49:00 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:00.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:49:00 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:49:00 compute-1 ceph-mon[80077]: pgmap v195: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:49:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:00 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bf4004810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:00 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bf4004810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:01 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:01 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:49:01 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:01.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:49:01 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:01 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8c000045e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:01 compute-1 ceph-mon[80077]: pgmap v196: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 07 09:49:02 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:02 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:49:02 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:02.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:49:02 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:02 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8be00030a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:02 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:02 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bf4004810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:03 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:49:03 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:03 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:49:03 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:03.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:49:03 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:03 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bec002b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:03 compute-1 ceph-mon[80077]: pgmap v197: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:49:03 compute-1 sudo[107484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsxnaexuyzdqqvhzfvtdvawwfvkqqmng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100943.2514017-546-11169728443312/AnsiballZ_stat.py'
Dec 07 09:49:03 compute-1 sudo[107484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:49:03 compute-1 python3.9[107486]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:49:03 compute-1 sudo[107484]: pam_unix(sudo:session): session closed for user root
Dec 07 09:49:04 compute-1 sudo[107562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbyfdjlllgjzaxcttpncpcpkydfjbpev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100943.2514017-546-11169728443312/AnsiballZ_file.py'
Dec 07 09:49:04 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:04 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:49:04 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:04.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:49:04 compute-1 sudo[107562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:49:04 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:04 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bec002b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:04 compute-1 python3.9[107565]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:49:04 compute-1 sudo[107562]: pam_unix(sudo:session): session closed for user root
Dec 07 09:49:04 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:04 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8be00030a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:04 compute-1 sudo[107715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncevbhohhwbezzgbrsupbjznxovvtdyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100944.6528468-585-22643177990427/AnsiballZ_file.py'
Dec 07 09:49:04 compute-1 sudo[107715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:49:05 compute-1 python3.9[107717]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:49:05 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:05 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:49:05 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:05.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:49:05 compute-1 sudo[107715]: pam_unix(sudo:session): session closed for user root
Dec 07 09:49:05 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:05 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bec002b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:05 compute-1 sudo[107867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjxmuzybfwmioieeidmkhybltnhkzxqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100945.3900518-609-109740275804724/AnsiballZ_stat.py'
Dec 07 09:49:05 compute-1 sudo[107867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:49:05 compute-1 python3.9[107869]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:49:05 compute-1 sudo[107867]: pam_unix(sudo:session): session closed for user root
Dec 07 09:49:06 compute-1 ceph-mon[80077]: pgmap v198: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 07 09:49:06 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:06 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:49:06 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:06.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:49:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:06 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8c000045e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:06 compute-1 sudo[107946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bghuuerbcukhnpqqilfatplcymxvqani ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100945.3900518-609-109740275804724/AnsiballZ_file.py'
Dec 07 09:49:06 compute-1 sudo[107946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:49:06 compute-1 python3.9[107948]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:49:06 compute-1 sudo[107946]: pam_unix(sudo:session): session closed for user root
Dec 07 09:49:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:06 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8be00030a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:07 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:07 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:49:07 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:07.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:49:07 compute-1 sudo[108098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgimrnqwelytjiwcvbevseuzlshkiiow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100946.9911115-654-197590355171701/AnsiballZ_timezone.py'
Dec 07 09:49:07 compute-1 sudo[108098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:49:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:07 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bf4004810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:07 compute-1 python3.9[108100]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 07 09:49:07 compute-1 systemd[1]: Starting Time & Date Service...
Dec 07 09:49:07 compute-1 systemd[1]: Started Time & Date Service.
Dec 07 09:49:07 compute-1 sudo[108098]: pam_unix(sudo:session): session closed for user root
Dec 07 09:49:08 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:08 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:49:08 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:08.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:49:08 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:08 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bec002b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:08 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:49:08 compute-1 ceph-mon[80077]: pgmap v199: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:49:08 compute-1 sudo[108255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psqrdmvrgvkijwaenwynchasnsbwiihb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100948.2250829-681-76011806348621/AnsiballZ_file.py'
Dec 07 09:49:08 compute-1 sudo[108255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:49:08 compute-1 python3.9[108257]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:49:08 compute-1 sudo[108255]: pam_unix(sudo:session): session closed for user root
Dec 07 09:49:08 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:08 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8c000045e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:09 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:09 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:49:09 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:09.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:49:09 compute-1 sudo[108407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcfcrtgdwqfymaubmzoachjlnvhoyeql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100949.0646334-705-269699614034623/AnsiballZ_stat.py'
Dec 07 09:49:09 compute-1 sudo[108407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:49:09 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:09 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8be00030a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:09 compute-1 python3.9[108409]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:49:09 compute-1 sudo[108407]: pam_unix(sudo:session): session closed for user root
Dec 07 09:49:09 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/094909 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 07 09:49:09 compute-1 sudo[108485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asfbkwntzostsprizzfmnojliamffccx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100949.0646334-705-269699614034623/AnsiballZ_file.py'
Dec 07 09:49:09 compute-1 sudo[108485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:49:10 compute-1 python3.9[108487]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:49:10 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:10 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:49:10 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:10.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:49:10 compute-1 sudo[108485]: pam_unix(sudo:session): session closed for user root
Dec 07 09:49:10 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:10 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bf4004810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:10 compute-1 ceph-mon[80077]: pgmap v200: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:49:10 compute-1 sudo[108638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-niehoffusdafrdjjabnjwvugagudzacw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100950.375146-742-176546330638712/AnsiballZ_stat.py'
Dec 07 09:49:10 compute-1 sudo[108638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:49:10 compute-1 python3.9[108640]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:49:10 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:10 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bf4004810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:10 compute-1 sudo[108638]: pam_unix(sudo:session): session closed for user root
Dec 07 09:49:11 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:11 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:49:11 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:11.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:49:11 compute-1 sudo[108716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elfqgzbumstwuuiymqhjeyqopthlyzve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100950.375146-742-176546330638712/AnsiballZ_file.py'
Dec 07 09:49:11 compute-1 sudo[108716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:49:11 compute-1 python3.9[108718]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.v3zuku7t recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:49:11 compute-1 sudo[108716]: pam_unix(sudo:session): session closed for user root
Dec 07 09:49:11 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:11 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8c000045e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:11 compute-1 sudo[108868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbaaagvuqvxjsyvanwosxvazfqbttxxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100951.6976523-778-277682103425684/AnsiballZ_stat.py'
Dec 07 09:49:11 compute-1 sudo[108868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:49:12 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:12 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:49:12 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:12.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:49:12 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:12 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8be00030a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:12 compute-1 ceph-mon[80077]: pgmap v201: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 07 09:49:12 compute-1 python3.9[108870]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:49:12 compute-1 sudo[108868]: pam_unix(sudo:session): session closed for user root
Dec 07 09:49:12 compute-1 sudo[108947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eweatvtemsxodwdkmfytjonaaqjepoio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100951.6976523-778-277682103425684/AnsiballZ_file.py'
Dec 07 09:49:12 compute-1 sudo[108947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:49:12 compute-1 python3.9[108949]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:49:12 compute-1 sudo[108947]: pam_unix(sudo:session): session closed for user root
Dec 07 09:49:12 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:12 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bf4004810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:13 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:13 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:49:13 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:13.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:49:13 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:49:13 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:13 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bf4004810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:13 compute-1 sudo[109099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsvdqwwjvzuhjvyaktaqkpspvssxizxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100953.0592005-816-240729227354572/AnsiballZ_command.py'
Dec 07 09:49:13 compute-1 sudo[109099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:49:13 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:49:13 compute-1 python3.9[109101]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:49:13 compute-1 sudo[109099]: pam_unix(sudo:session): session closed for user root
Dec 07 09:49:14 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:14 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:49:14 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:14.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:49:14 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:14 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8c000045e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:14 compute-1 sudo[109253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-giqnkderkigdnobwhgdcihjrvaepmfde ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765100953.925867-840-62750150325219/AnsiballZ_edpm_nftables_from_files.py'
Dec 07 09:49:14 compute-1 sudo[109253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:49:14 compute-1 python3[109255]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 07 09:49:14 compute-1 sudo[109253]: pam_unix(sudo:session): session closed for user root
Dec 07 09:49:14 compute-1 ceph-mon[80077]: pgmap v202: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:49:14 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:14 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8be00030a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:15 compute-1 sudo[109405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjslujtdqjuyperhftsumfiezaivjikp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100954.872408-864-54879208268328/AnsiballZ_stat.py'
Dec 07 09:49:15 compute-1 sudo[109405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:49:15 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:15 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:49:15 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:15.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:49:15 compute-1 sudo[109408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 09:49:15 compute-1 sudo[109408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:49:15 compute-1 sudo[109408]: pam_unix(sudo:session): session closed for user root
Dec 07 09:49:15 compute-1 python3.9[109407]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:49:15 compute-1 sudo[109433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 07 09:49:15 compute-1 sudo[109433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:49:15 compute-1 sudo[109405]: pam_unix(sudo:session): session closed for user root
Dec 07 09:49:15 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:15 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bec0042a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:15 compute-1 sudo[109540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyauwcxwpoyhmzfjqlexwmspbquqalgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100954.872408-864-54879208268328/AnsiballZ_file.py'
Dec 07 09:49:15 compute-1 sudo[109540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:49:15 compute-1 python3.9[109547]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:49:15 compute-1 sudo[109540]: pam_unix(sudo:session): session closed for user root
Dec 07 09:49:15 compute-1 sudo[109433]: pam_unix(sudo:session): session closed for user root
Dec 07 09:49:16 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:16 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:49:16 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:16.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:49:16 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:16 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bf4004810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:16 compute-1 ceph-mon[80077]: pgmap v203: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:49:16 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:49:16 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 07 09:49:16 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:49:16 compute-1 sudo[109717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-caiyrfoyxhepfwjltjhcqiwiodgfhglu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100956.1227024-901-1224083998205/AnsiballZ_stat.py'
Dec 07 09:49:16 compute-1 sudo[109717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:49:16 compute-1 python3.9[109719]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:49:16 compute-1 sudo[109717]: pam_unix(sudo:session): session closed for user root
Dec 07 09:49:16 compute-1 sudo[109745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 09:49:16 compute-1 sudo[109745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:49:16 compute-1 sudo[109745]: pam_unix(sudo:session): session closed for user root
Dec 07 09:49:16 compute-1 sudo[109820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyntetoxlnbqcntccvxgflwqxnrcsmny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100956.1227024-901-1224083998205/AnsiballZ_file.py'
Dec 07 09:49:16 compute-1 sudo[109820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:49:16 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:16 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8c000045e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:17 compute-1 python3.9[109822]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:49:17 compute-1 sudo[109820]: pam_unix(sudo:session): session closed for user root
Dec 07 09:49:17 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:17 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:49:17 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:17.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:49:17 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:17 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8be00041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:17 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:49:17 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 07 09:49:17 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 07 09:49:17 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:49:17 compute-1 sudo[109972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pysqszxgeacdxfjgroqpxykffyxhsilw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100957.5177846-936-168874480194825/AnsiballZ_stat.py'
Dec 07 09:49:17 compute-1 sudo[109972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:49:18 compute-1 python3.9[109974]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:49:18 compute-1 sudo[109972]: pam_unix(sudo:session): session closed for user root
Dec 07 09:49:18 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:18 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:49:18 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:18.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:49:18 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:18 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bec0042a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:18 compute-1 sudo[110051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yexqbhyphryurxzalypcrrbsegecfcrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100957.5177846-936-168874480194825/AnsiballZ_file.py'
Dec 07 09:49:18 compute-1 sudo[110051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:49:18 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:49:18 compute-1 ceph-mon[80077]: pgmap v204: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 07 09:49:18 compute-1 python3.9[110053]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:49:18 compute-1 sudo[110051]: pam_unix(sudo:session): session closed for user root
Dec 07 09:49:18 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:18 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bf4004810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:19 compute-1 sudo[110203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idlrdqmgrjqnhnrzukejgnwsytlpqtow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100958.824912-972-169095871527910/AnsiballZ_stat.py'
Dec 07 09:49:19 compute-1 sudo[110203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:49:19 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:19 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:49:19 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:19.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:49:19 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:19 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 07 09:49:19 compute-1 python3.9[110205]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:49:19 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:19 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8c000045e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:19 compute-1 sudo[110203]: pam_unix(sudo:session): session closed for user root
Dec 07 09:49:19 compute-1 ceph-mon[80077]: pgmap v205: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 07 09:49:19 compute-1 sudo[110281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jglsracnvsgjvpgqzfalrgivulodgape ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100958.824912-972-169095871527910/AnsiballZ_file.py'
Dec 07 09:49:19 compute-1 sudo[110281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:49:19 compute-1 python3.9[110283]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:49:19 compute-1 sudo[110281]: pam_unix(sudo:session): session closed for user root
Dec 07 09:49:20 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:20 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 07 09:49:20 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:20.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 07 09:49:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:20 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8be00041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:20 compute-1 sudo[110434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smpxxkpkixkedarvhqrvyuuyvluiavta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100960.184985-1008-138450731510095/AnsiballZ_stat.py'
Dec 07 09:49:20 compute-1 sudo[110434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:49:20 compute-1 python3.9[110436]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:49:20 compute-1 sudo[110434]: pam_unix(sudo:session): session closed for user root
Dec 07 09:49:20 compute-1 sudo[110512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysuzudilkhynjzvnplacmsrqcthfgoow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100960.184985-1008-138450731510095/AnsiballZ_file.py'
Dec 07 09:49:20 compute-1 sudo[110512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:49:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:20 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bec0042a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:21 compute-1 python3.9[110516]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:49:21 compute-1 sudo[110512]: pam_unix(sudo:session): session closed for user root
Dec 07 09:49:21 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:21 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:49:21 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:21.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:49:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:21 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bf4004810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:21 compute-1 ceph-mon[80077]: pgmap v206: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 1 op/s
Dec 07 09:49:21 compute-1 sudo[110666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxelgymnfgqeakmaolskgtnucqfhlvyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100961.6078944-1047-67038345453103/AnsiballZ_command.py'
Dec 07 09:49:21 compute-1 sudo[110666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:49:22 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:22 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:49:22 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:22.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:49:22 compute-1 python3.9[110668]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:49:22 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:22 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bd8001ba0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:22 compute-1 sudo[110666]: pam_unix(sudo:session): session closed for user root
Dec 07 09:49:22 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:22 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 07 09:49:22 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:22 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 07 09:49:22 compute-1 sudo[110822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqvoxodwepvhspecfbrgpfqmtcqoboqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100962.3834567-1071-105253135182388/AnsiballZ_blockinfile.py'
Dec 07 09:49:22 compute-1 sudo[110822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:49:22 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:22 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8be00041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:23 compute-1 python3.9[110824]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:49:23 compute-1 sudo[110822]: pam_unix(sudo:session): session closed for user root
Dec 07 09:49:23 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:23 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:49:23 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:23.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:49:23 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:49:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:23 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bec0042a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:23 compute-1 sudo[110881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 07 09:49:23 compute-1 sudo[110881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:49:23 compute-1 sudo[110881]: pam_unix(sudo:session): session closed for user root
Dec 07 09:49:23 compute-1 sudo[110999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltmpolldtvlnofbxjsroxogysatmorpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100963.3814385-1098-78665273037105/AnsiballZ_file.py'
Dec 07 09:49:23 compute-1 sudo[110999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:49:23 compute-1 python3.9[111001]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:49:23 compute-1 sudo[110999]: pam_unix(sudo:session): session closed for user root
Dec 07 09:49:23 compute-1 ceph-mon[80077]: pgmap v207: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Dec 07 09:49:23 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:49:23 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:49:24 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:24 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:49:24 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:24.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:49:24 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:24 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bf4004810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:24 compute-1 sudo[111152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxgipcslnvqeowknzsmtjaouhdqbxzof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100964.0367918-1098-184400155456081/AnsiballZ_file.py'
Dec 07 09:49:24 compute-1 sudo[111152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:49:24 compute-1 python3.9[111154]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:49:24 compute-1 sudo[111152]: pam_unix(sudo:session): session closed for user root
Dec 07 09:49:24 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:24 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bd8001ba0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:25 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:25 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:49:25 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:25.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:49:25 compute-1 sudo[111304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dosfspldzxpexzthunimfbhhdrenljcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100964.8739412-1143-114716073851537/AnsiballZ_mount.py'
Dec 07 09:49:25 compute-1 sudo[111304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:49:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:25 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 07 09:49:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:25 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8be00041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:25 compute-1 python3.9[111306]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 07 09:49:25 compute-1 sudo[111304]: pam_unix(sudo:session): session closed for user root
Dec 07 09:49:26 compute-1 ceph-mon[80077]: pgmap v208: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 07 09:49:26 compute-1 sudo[111456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icytsqrqozojdymssqgccxgxdfykiosr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100965.7447166-1143-199631981766952/AnsiballZ_mount.py'
Dec 07 09:49:26 compute-1 sudo[111456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:49:26 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:26 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:49:26 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:26.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:49:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:26 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bec0042a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:26 compute-1 python3.9[111458]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 07 09:49:26 compute-1 sudo[111456]: pam_unix(sudo:session): session closed for user root
Dec 07 09:49:26 compute-1 sshd-session[104130]: Connection closed by 192.168.122.30 port 35524
Dec 07 09:49:26 compute-1 sshd-session[104127]: pam_unix(sshd:session): session closed for user zuul
Dec 07 09:49:26 compute-1 systemd[1]: session-43.scope: Deactivated successfully.
Dec 07 09:49:26 compute-1 systemd[1]: session-43.scope: Consumed 32.846s CPU time.
Dec 07 09:49:26 compute-1 systemd-logind[796]: Session 43 logged out. Waiting for processes to exit.
Dec 07 09:49:26 compute-1 systemd-logind[796]: Removed session 43.
Dec 07 09:49:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:26 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bf4004810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:27 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:27 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 07 09:49:27 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:27.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 07 09:49:27 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:27 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bd8001ba0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:28 compute-1 ceph-mon[80077]: pgmap v209: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 07 09:49:28 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:49:28 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:28 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8be00041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:28 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:28 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:49:28 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:28.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:49:28 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:49:28 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:28 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8be00041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:29 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:29 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:49:29 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:29.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:49:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:29 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bf4004810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:30 compute-1 ceph-mon[80077]: pgmap v210: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 07 09:49:30 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:30 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bd8002c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:30 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:30 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:49:30 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:30.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:49:30 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:30 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bd8002c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:31 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:31 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:49:31 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:31.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:49:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:31 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bec0042a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/094931 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 07 09:49:32 compute-1 sshd-session[111487]: Accepted publickey for zuul from 192.168.122.30 port 50310 ssh2: ECDSA SHA256:Ge4vEI3tJyOAEV3kv3WUGTzBV/uoL+dgWZHkPhbKL64
Dec 07 09:49:32 compute-1 systemd-logind[796]: New session 44 of user zuul.
Dec 07 09:49:32 compute-1 systemd[1]: Started Session 44 of User zuul.
Dec 07 09:49:32 compute-1 sshd-session[111487]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 07 09:49:32 compute-1 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 07 09:49:32 compute-1 ceph-mon[80077]: pgmap v211: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 07 09:49:32 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:32 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bf4004810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:32 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:32 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:49:32 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:32.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:49:32 compute-1 sudo[111642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yaskmyesixohehxvxkxrycjzcrbukqmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100972.1684997-19-254612994508485/AnsiballZ_tempfile.py'
Dec 07 09:49:32 compute-1 sudo[111642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:49:32 compute-1 python3.9[111644]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec 07 09:49:32 compute-1 sudo[111642]: pam_unix(sudo:session): session closed for user root
Dec 07 09:49:32 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:32 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bd8002c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:33 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:33 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:49:33 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:33.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:49:33 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:49:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:33 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8be00041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:33 compute-1 sudo[111794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfeydcgahwrlvjqccgkhkfpipxstsmhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100973.1807086-55-213867648474846/AnsiballZ_stat.py'
Dec 07 09:49:33 compute-1 sudo[111794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:49:33 compute-1 python3.9[111796]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 07 09:49:33 compute-1 sudo[111794]: pam_unix(sudo:session): session closed for user root
Dec 07 09:49:34 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:34 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8be00041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:34 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:34 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:49:34 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:34.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:49:34 compute-1 ceph-mon[80077]: pgmap v212: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec 07 09:49:34 compute-1 sudo[111950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhpcrpapbjkbomqovhmugbgoegyomukv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100974.067108-79-42905874827119/AnsiballZ_slurp.py'
Dec 07 09:49:34 compute-1 sudo[111950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:49:34 compute-1 python3.9[111952]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Dec 07 09:49:34 compute-1 sudo[111950]: pam_unix(sudo:session): session closed for user root
Dec 07 09:49:34 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:34 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8be00041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:35 compute-1 sudo[112102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmezzzvzwligsrwtfuqxtpwtprcslzkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100974.9229305-103-178780061839276/AnsiballZ_stat.py'
Dec 07 09:49:35 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:35 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:49:35 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:35.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:49:35 compute-1 sudo[112102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:49:35 compute-1 python3.9[112104]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.7s_4qzgp follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:49:35 compute-1 sudo[112102]: pam_unix(sudo:session): session closed for user root
Dec 07 09:49:35 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:35 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bdc000fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:35 compute-1 sudo[112227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sigzzsddbuysbhcjxpnjkkbmgqmfvqrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100974.9229305-103-178780061839276/AnsiballZ_copy.py'
Dec 07 09:49:35 compute-1 sudo[112227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:49:36 compute-1 python3.9[112229]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.7s_4qzgp mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765100974.9229305-103-178780061839276/.source.7s_4qzgp _original_basename=.zini2mx5 follow=False checksum=d33b7f67f47d03d6f4e754679ee7a83508aaa6d6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:49:36 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:36 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bf4004810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:36 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:36 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:49:36 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:36.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:49:36 compute-1 sudo[112227]: pam_unix(sudo:session): session closed for user root
Dec 07 09:49:36 compute-1 ceph-mon[80077]: pgmap v213: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec 07 09:49:36 compute-1 sudo[112357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 09:49:36 compute-1 sudo[112403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktofwlhcyuqfnqxctlfqcknkumjxbowe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100976.3352482-148-63601554459620/AnsiballZ_setup.py'
Dec 07 09:49:36 compute-1 sudo[112357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:49:36 compute-1 sudo[112403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:49:36 compute-1 sudo[112357]: pam_unix(sudo:session): session closed for user root
Dec 07 09:49:36 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:36 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bd8002c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:37 compute-1 python3.9[112407]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 07 09:49:37 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:37 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:49:37 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:37.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:49:37 compute-1 sudo[112403]: pam_unix(sudo:session): session closed for user root
Dec 07 09:49:37 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:37 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8be00041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:37 compute-1 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 07 09:49:37 compute-1 sudo[112559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztrlodxfptqlrsqwmqzhcxxhbemzlvfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100977.5211687-173-108619000802955/AnsiballZ_blockinfile.py'
Dec 07 09:49:37 compute-1 sudo[112559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:49:38 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:38 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bdc0018e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:38 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:38 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:49:38 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:38.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:49:38 compute-1 python3.9[112561]: ansible-ansible.builtin.blockinfile Invoked with block=compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDDztIgdvWfbGcTBsnJ/M+7HPF8fmQq/y+Bl35+zFajL3KlZAwT5Jrd0wBJFCENJp3TXe2vCz5X1q7WE7KkTCmfFoRuHmoqlZhTqT9s/+r8kiDatZiqCOWaKW4t/5FdXKBIVPlkry4+jUtXum7Hjaqx3CWAN9zTBaMGorSAA8LKMMvZPP0EYbAxaLgivTJ1mbZF0/ZNGo/5WQc2vAa9bAToTb0YwrajhjGwm8gpS1t7deqebzgprT7jWeXpxQZEVS/ynyQFICZ5W6covXVgsWgQNtfbmweGFQOMlP0vZE1/P3GUjWJgmaVsDrNDWdjCgiaRAZnNCC01eZyUjas+eot7B1Sg0BLS3JeORj3tIRcVI9DkuMQCdex5q/BCiz8YueUZn4qIiyvmG1max5Xui0X1LygXyNdyBWs5DbBGfPsFBLyXT1noEfYsgk5v0iu8DLl+PShKLO8xLqJMeYVYsUY8uG6qv+lA0YbVeiMomYLVXMABowwzcwzKHnlj5f+keT0=
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICAU0KXuEPsaXKf0jGICVhewmjwEgAqPrkc4waZyQc7o
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBUF894VPJUzj6uHFODSSpNciOlDtn3PuhA44yhVzfkk/lOehkynDHVgBX6zwUYnOmiLJE7vHinKqWzoAVHhOas=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCVimUYmVq1jwN4I5i4nI9XPpovC84bLnjioQY6MxnDdHWaEfuEub8qpNrfkTCFppybs82dXQEl9witk6tAj8GQQGfFN/IfI+GFHby5G2bWpOumixFRFVkhc3QW9inlnJNA0TMzwlbz5LOkL9/ShhCpshMnBGNjKJFaH5GvlqpWCYYAotq1zbwd6SRIu4O5cPa3+7mFmXKtlFl28oAFp3NMsNJ9wbIWhXeOcfUSNbrL52O30C6TKW8HiBC2kfg578bm0Pa6r2iMvPHhW7kMm5eQwUfB5l5JKgIsDJmaKjLej/4U7hO52yut7hfnV3O8qK0ZpD2xEwhe9OneH4tKueT63SehDENUIJWAasPiPrlHWkfm6PWhKwPMBu3Vuir/4R1SA6ZIJEzQeGq/nUuSBtbDZC4jDuXb8oywpR/uCaBgZbziPhqBMIegQDMvKeQGQmZn6V+eKkfv3I9Z83LbQRXEnIWiuf4XRp1btGZYv0+Q7zgiD+dw9QxCgWkdWxA9SoM=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEWDyTOT2SMCqj8YwhAvKshXrBfGOObG4cDM9r5B2FZj
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJuP9cUBko1m6+714/2inXnWXQqIN7Sx7/A0GBQAjM8bAkICVNXZtk9Pu38lY43gxHx3nZ57o3Dpp2ak8tsjrR4=
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCip7MnZvuJx8DmLIGnIc8NcND4H8xH1hog1PQWG+WFQHEqpA3BpOSGhk8Mr5skxXappecIladNg73ReINM2gE58XsvsHhQICeXuRBK091YtVSafixD3fEvhD+xGUIukp3F6EPKU0x4WQ0xWQC38o13OyZtGRApI6AQEAxg0QMsB7qwwroH6ag7l7U4sv5nYqK3upInbblwL0LYfo6jyhHnhwZBVjv2MTJ8zZktF54SlM68fh8WQwQbA7VMqK6wEJlDRkdsIXPbq2PN6V08KJlBkBlvgXu5aTIeGQ5DdFuKQutnMEWlwiCtoJNly6Pv7PwjZnDKkPQP5RamELk/eKCRHXY5SbfmyG9VtAHHEV2f9NsjnFZRBx9ikx/H6/NpPmlMji5VbyfY1b0u0DreNZqm2bDWRcL++rsjZDfWqh2cJOF4Jan0m12bfjWDBXeGiunpl4XWydA0nbi0v4RHvH6pD2BoTuxC2rVSR233WC88Xe5HU1WoXegIy43ksMeFvGs=
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINYukxfCIA1Xurqi7GbVHfVTkzw++ujxQPgfwUA9AznN
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOT58aEV4d46XVVznwJYUJL8kuqtWeT85ng6XRArVPbONJirV0BPyfS1SwB7SxPwywavSEowgTdPM8QvrYiA0kE=
                                              create=True mode=0644 path=/tmp/ansible.7s_4qzgp state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:49:38 compute-1 sudo[112559]: pam_unix(sudo:session): session closed for user root
Dec 07 09:49:38 compute-1 ceph-mon[80077]: pgmap v214: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 07 09:49:38 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:49:38 compute-1 sudo[112712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxtzxjmxpjwusuhjqqroepfgrumbehna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100978.4432178-197-239435629718510/AnsiballZ_command.py'
Dec 07 09:49:38 compute-1 sudo[112712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:49:38 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:38 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bf4004810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:39 compute-1 python3.9[112714]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.7s_4qzgp' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:49:39 compute-1 sudo[112712]: pam_unix(sudo:session): session closed for user root
Dec 07 09:49:39 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:39 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:49:39 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:39.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:49:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:39 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bd8002c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:39 compute-1 sudo[112866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfrfcymdwjetrzlpdvrkdfygajupovzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100979.3255653-221-169622115753081/AnsiballZ_file.py'
Dec 07 09:49:39 compute-1 sudo[112866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:49:40 compute-1 ceph-mon[80077]: pgmap v215: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 07 09:49:40 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:40 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8be00041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:40 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:40 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:49:40 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:40.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:49:40 compute-1 python3.9[112868]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.7s_4qzgp state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:49:40 compute-1 sudo[112866]: pam_unix(sudo:session): session closed for user root
Dec 07 09:49:40 compute-1 sshd-session[111491]: Connection closed by 192.168.122.30 port 50310
Dec 07 09:49:40 compute-1 sshd-session[111487]: pam_unix(sshd:session): session closed for user zuul
Dec 07 09:49:40 compute-1 systemd-logind[796]: Session 44 logged out. Waiting for processes to exit.
Dec 07 09:49:40 compute-1 systemd[1]: session-44.scope: Deactivated successfully.
Dec 07 09:49:40 compute-1 systemd[1]: session-44.scope: Consumed 5.357s CPU time.
Dec 07 09:49:40 compute-1 systemd-logind[796]: Removed session 44.
Dec 07 09:49:40 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:40 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bdc0018e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:41 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:41 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:49:41 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:41.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:49:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:41 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bdc0018e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/094941 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 07 09:49:42 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:42 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bd8002c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:42 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:42 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:49:42 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:42.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:49:42 compute-1 ceph-mon[80077]: pgmap v216: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec 07 09:49:42 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:42 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8be00041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:43 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:43 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:49:43 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:43.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:49:43 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:49:43 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:49:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:43 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bdc0018e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:44 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:44 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bdc0018e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:44 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:44 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:49:44 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:44.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:49:44 compute-1 ceph-mon[80077]: pgmap v217: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 07 09:49:44 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:44 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bd8002c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:45 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:45 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:49:45 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:45.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:49:45 compute-1 ceph-mon[80077]: pgmap v218: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 07 09:49:45 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:45 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8be00041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:46 compute-1 sshd-session[112896]: Accepted publickey for zuul from 192.168.122.30 port 34842 ssh2: ECDSA SHA256:Ge4vEI3tJyOAEV3kv3WUGTzBV/uoL+dgWZHkPhbKL64
Dec 07 09:49:46 compute-1 systemd-logind[796]: New session 45 of user zuul.
Dec 07 09:49:46 compute-1 systemd[1]: Started Session 45 of User zuul.
Dec 07 09:49:46 compute-1 sshd-session[112896]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 07 09:49:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:46 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8be00041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:46 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:46 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:49:46 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:46.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:49:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:46 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bf4004810 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:47 compute-1 python3.9[113050]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 07 09:49:47 compute-1 ceph-mon[80077]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0.
Dec 07 09:49:47 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:49:47.123968) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 07 09:49:47 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22
Dec 07 09:49:47 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100987124068, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 1843, "num_deletes": 250, "total_data_size": 4827053, "memory_usage": 4918520, "flush_reason": "Manual Compaction"}
Dec 07 09:49:47 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started
Dec 07 09:49:47 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100987141951, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 1961278, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10808, "largest_seqno": 12646, "table_properties": {"data_size": 1955355, "index_size": 2998, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14774, "raw_average_key_size": 20, "raw_value_size": 1942546, "raw_average_value_size": 2686, "num_data_blocks": 132, "num_entries": 723, "num_filter_entries": 723, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100806, "oldest_key_time": 1765100806, "file_creation_time": 1765100987, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "19b7cd17-892b-4642-8771-311739802c4a", "db_session_id": "JE48258Z8V09XG5T5TD7", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Dec 07 09:49:47 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 18038 microseconds, and 6543 cpu microseconds.
Dec 07 09:49:47 compute-1 ceph-mon[80077]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 07 09:49:47 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:49:47.142015) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 1961278 bytes OK
Dec 07 09:49:47 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:49:47.142046) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started
Dec 07 09:49:47 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:49:47.144053) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done
Dec 07 09:49:47 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:49:47.144136) EVENT_LOG_v1 {"time_micros": 1765100987144119, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 07 09:49:47 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:49:47.144175) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 07 09:49:47 compute-1 ceph-mon[80077]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 4818762, prev total WAL file size 4818762, number of live WAL files 2.
Dec 07 09:49:47 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 09:49:47 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:49:47.146866) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Dec 07 09:49:47 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 07 09:49:47 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(1915KB)], [21(13MB)]
Dec 07 09:49:47 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100987146966, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 16070152, "oldest_snapshot_seqno": -1}
Dec 07 09:49:47 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:47 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:49:47 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:47.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:49:47 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 4418 keys, 14090814 bytes, temperature: kUnknown
Dec 07 09:49:47 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100987268582, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 14090814, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14057203, "index_size": 21469, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11077, "raw_key_size": 111976, "raw_average_key_size": 25, "raw_value_size": 13972417, "raw_average_value_size": 3162, "num_data_blocks": 920, "num_entries": 4418, "num_filter_entries": 4418, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100473, "oldest_key_time": 0, "file_creation_time": 1765100987, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "19b7cd17-892b-4642-8771-311739802c4a", "db_session_id": "JE48258Z8V09XG5T5TD7", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}}
Dec 07 09:49:47 compute-1 ceph-mon[80077]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 07 09:49:47 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:49:47.269025) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 14090814 bytes
Dec 07 09:49:47 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:49:47.270566) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 131.9 rd, 115.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 13.5 +0.0 blob) out(13.4 +0.0 blob), read-write-amplify(15.4) write-amplify(7.2) OK, records in: 4864, records dropped: 446 output_compression: NoCompression
Dec 07 09:49:47 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:49:47.270599) EVENT_LOG_v1 {"time_micros": 1765100987270583, "job": 10, "event": "compaction_finished", "compaction_time_micros": 121807, "compaction_time_cpu_micros": 35529, "output_level": 6, "num_output_files": 1, "total_output_size": 14090814, "num_input_records": 4864, "num_output_records": 4418, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 07 09:49:47 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 09:49:47 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100987271342, "job": 10, "event": "table_file_deletion", "file_number": 23}
Dec 07 09:49:47 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 09:49:47 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765100987275687, "job": 10, "event": "table_file_deletion", "file_number": 21}
Dec 07 09:49:47 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:49:47.146677) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 09:49:47 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:49:47.275825) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 09:49:47 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:49:47.275835) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 09:49:47 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:49:47.275838) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 09:49:47 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:49:47.275840) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 09:49:47 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:49:47.275842) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 09:49:47 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:47 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bd8002c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:48 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:48 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:48 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bd8002c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:48 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:49:48 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:48.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:49:48 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:49:48 compute-1 ceph-mon[80077]: pgmap v219: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 07 09:49:48 compute-1 sudo[113206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-betkfhxlqhhftdgdcjcgxerlzofxgven ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100987.9217892-57-112124300084263/AnsiballZ_systemd.py'
Dec 07 09:49:48 compute-1 sudo[113206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:49:48 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:48 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bd8002c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:49 compute-1 python3.9[113208]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 07 09:49:49 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:49 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:49:49 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:49.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:49:49 compute-1 sudo[113206]: pam_unix(sudo:session): session closed for user root
Dec 07 09:49:49 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:49 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8be00041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:49 compute-1 ceph-mon[80077]: pgmap v220: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 07 09:49:49 compute-1 sudo[113360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djvkfbybuavbudjcmptbqhijvqlzqeel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100989.4475634-81-57105212638159/AnsiballZ_systemd.py'
Dec 07 09:49:49 compute-1 sudo[113360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:49:50 compute-1 python3.9[113362]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 07 09:49:50 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:50 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 07 09:49:50 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:50 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8be00041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:50 compute-1 sudo[113360]: pam_unix(sudo:session): session closed for user root
Dec 07 09:49:50 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:50 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:49:50 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:50.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:49:50 compute-1 sudo[113514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhiuysrpjbdxjrulxpxivovusknwqddx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100990.3765373-108-73660260282022/AnsiballZ_command.py'
Dec 07 09:49:50 compute-1 sudo[113514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:49:50 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:50 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8be00041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:51 compute-1 python3.9[113516]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:49:51 compute-1 sudo[113514]: pam_unix(sudo:session): session closed for user root
Dec 07 09:49:51 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/094951 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 07 09:49:51 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:51 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:49:51 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:51.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:49:51 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:51 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bd8002c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:51 compute-1 ceph-mon[80077]: pgmap v221: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 07 09:49:51 compute-1 sudo[113667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygnursegtdyklgidrkykbfujinlfqybk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100991.2741153-132-105379403869649/AnsiballZ_stat.py'
Dec 07 09:49:51 compute-1 sudo[113667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:49:51 compute-1 python3.9[113669]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 07 09:49:51 compute-1 sudo[113667]: pam_unix(sudo:session): session closed for user root
Dec 07 09:49:52 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:52 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8be00041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:52 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:52 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:49:52 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:52.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:49:52 compute-1 sudo[113820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mudfmotdxeaglicfxquogtiojpspgrhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100992.2378888-159-161962403798580/AnsiballZ_file.py'
Dec 07 09:49:52 compute-1 sudo[113820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:49:52 compute-1 python3.9[113822]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:49:52 compute-1 sudo[113820]: pam_unix(sudo:session): session closed for user root
Dec 07 09:49:52 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:52 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8be00041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:53 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:53 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 07 09:49:53 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:53 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 07 09:49:53 compute-1 sshd-session[112900]: Connection closed by 192.168.122.30 port 34842
Dec 07 09:49:53 compute-1 sshd-session[112896]: pam_unix(sshd:session): session closed for user zuul
Dec 07 09:49:53 compute-1 systemd[1]: session-45.scope: Deactivated successfully.
Dec 07 09:49:53 compute-1 systemd[1]: session-45.scope: Consumed 4.285s CPU time.
Dec 07 09:49:53 compute-1 systemd-logind[796]: Session 45 logged out. Waiting for processes to exit.
Dec 07 09:49:53 compute-1 systemd-logind[796]: Removed session 45.
Dec 07 09:49:53 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:53 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:49:53 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:53.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:49:53 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:53 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bdc003580 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:53 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:49:53 compute-1 ceph-mon[80077]: pgmap v222: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec 07 09:49:54 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:54 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bd8002c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:54 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:54 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:49:54 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:54.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:49:54 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:54 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8be00041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:55 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:55 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:49:55 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:55.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:49:55 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:55 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8be00041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:55 compute-1 ceph-mon[80077]: pgmap v223: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec 07 09:49:56 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:56 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bdc003580 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:56 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:56 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:49:56 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:56.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:49:56 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:56 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 07 09:49:56 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:56 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 07 09:49:56 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:56 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bd8002c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:57 compute-1 sudo[113849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 09:49:57 compute-1 sudo[113849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:49:57 compute-1 sudo[113849]: pam_unix(sudo:session): session closed for user root
Dec 07 09:49:57 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:57 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:49:57 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:57.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:49:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:57 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bd8002c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:57 compute-1 ceph-mon[80077]: pgmap v224: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec 07 09:49:57 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:49:58 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:58 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bf40049b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:58 compute-1 sshd-session[113875]: Accepted publickey for zuul from 192.168.122.30 port 51698 ssh2: ECDSA SHA256:Ge4vEI3tJyOAEV3kv3WUGTzBV/uoL+dgWZHkPhbKL64
Dec 07 09:49:58 compute-1 systemd-logind[796]: New session 46 of user zuul.
Dec 07 09:49:58 compute-1 systemd[1]: Started Session 46 of User zuul.
Dec 07 09:49:58 compute-1 sshd-session[113875]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 07 09:49:58 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:49:58 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:58 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:49:58 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:49:58.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:49:58 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:58 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bdc003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:59 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:49:59 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:49:59 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:49:59.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:49:59 compute-1 python3.9[114028]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 07 09:49:59 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:59 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8be00041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:49:59 compute-1 ceph-mon[80077]: pgmap v225: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec 07 09:49:59 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:49:59 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 07 09:50:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:00 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bd8002c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:00 compute-1 sudo[114183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpazzqxnfqfteptdnlkjzjwsobclyuum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100999.9317322-63-220249562547186/AnsiballZ_setup.py'
Dec 07 09:50:00 compute-1 sudo[114183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:50:00 compute-1 python3.9[114185]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 07 09:50:00 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:00 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:50:00 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:00.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:50:00 compute-1 ceph-mon[80077]: overall HEALTH_OK
Dec 07 09:50:00 compute-1 sudo[114183]: pam_unix(sudo:session): session closed for user root
Dec 07 09:50:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:00 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bd8002c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:01 compute-1 sudo[114267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsvbhtcexnhjzgqpilsdkksjhqbksumu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765100999.9317322-63-220249562547186/AnsiballZ_dnf.py'
Dec 07 09:50:01 compute-1 sudo[114267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:50:01 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:01 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:50:01 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:01.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:50:01 compute-1 python3.9[114269]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 07 09:50:01 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:01 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bdc003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:01 compute-1 ceph-mon[80077]: pgmap v226: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Dec 07 09:50:02 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:02 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8be00041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:02 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:02 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:50:02 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:02.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:50:02 compute-1 sudo[114267]: pam_unix(sudo:session): session closed for user root
Dec 07 09:50:02 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:02 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 07 09:50:02 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:02 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bd8002c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:03 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:03 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:50:03 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:03.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:50:03 compute-1 python3.9[114421]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:50:03 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:03 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bf4004a10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:03 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:50:03 compute-1 ceph-mon[80077]: pgmap v227: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 682 B/s wr, 2 op/s
Dec 07 09:50:04 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:04 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bdc003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:04 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:04 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:50:04 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:04.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:50:04 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:04 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bdc003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:05 compute-1 python3.9[114573]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 07 09:50:05 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:05 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:50:05 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:05.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:50:05 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:05 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bd8002c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:05 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/095005 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 07 09:50:05 compute-1 ceph-mon[80077]: pgmap v228: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1.4 KiB/s wr, 4 op/s
Dec 07 09:50:05 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:05 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 07 09:50:05 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:05 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 07 09:50:05 compute-1 python3.9[114725]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 07 09:50:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:06 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bf4004a30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:06 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:06 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:50:06 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:06.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:50:06 compute-1 python3.9[114876]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 07 09:50:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:06 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bd8002c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:07 compute-1 sshd-session[113878]: Connection closed by 192.168.122.30 port 51698
Dec 07 09:50:07 compute-1 sshd-session[113875]: pam_unix(sshd:session): session closed for user zuul
Dec 07 09:50:07 compute-1 systemd[1]: session-46.scope: Deactivated successfully.
Dec 07 09:50:07 compute-1 systemd[1]: session-46.scope: Consumed 6.563s CPU time.
Dec 07 09:50:07 compute-1 systemd-logind[796]: Session 46 logged out. Waiting for processes to exit.
Dec 07 09:50:07 compute-1 systemd-logind[796]: Removed session 46.
Dec 07 09:50:07 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:07 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:50:07 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:07.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:50:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:07 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8c00001ff0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:07 compute-1 ceph-mon[80077]: pgmap v229: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Dec 07 09:50:08 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:08 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bec002770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:08 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:50:08 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:08 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:50:08 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:08.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:50:08 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:08 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 07 09:50:08 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:08 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bec002770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:09 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:09 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:50:09 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:09.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:50:09 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:09 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bd8002c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:10 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:10 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8c00001ff0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:10 compute-1 ceph-mon[80077]: pgmap v230: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Dec 07 09:50:10 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:10 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000023s ======
Dec 07 09:50:10 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:10.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec 07 09:50:10 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:10 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8c00001ff0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:11 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/095011 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 07 09:50:11 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:11 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000024s ======
Dec 07 09:50:11 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:11.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 07 09:50:11 compute-1 ceph-mon[80077]: pgmap v231: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Dec 07 09:50:11 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:11 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bf4004c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:12 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:12 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bd8002c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:12 compute-1 sshd-session[114904]: Accepted publickey for zuul from 192.168.122.30 port 45096 ssh2: ECDSA SHA256:Ge4vEI3tJyOAEV3kv3WUGTzBV/uoL+dgWZHkPhbKL64
Dec 07 09:50:12 compute-1 systemd-logind[796]: New session 47 of user zuul.
Dec 07 09:50:12 compute-1 systemd[1]: Started Session 47 of User zuul.
Dec 07 09:50:12 compute-1 sshd-session[114904]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 07 09:50:12 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:50:12 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:12 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:50:12 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:12.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:50:12 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:12 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8c00001ff0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:13 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:13 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:50:13 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:13.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:50:13 compute-1 ceph-mon[80077]: pgmap v232: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 07 09:50:13 compute-1 python3.9[115057]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 07 09:50:13 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:13 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bec001e50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:13 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:50:14 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:14 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bec001e50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:14 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:14 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000023s ======
Dec 07 09:50:14 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:14.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec 07 09:50:14 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:14 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bf4004c30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:15 compute-1 sudo[115212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfltharyosnykwihfthzfaxxurdsfebq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101014.5704193-110-231746206025395/AnsiballZ_file.py'
Dec 07 09:50:15 compute-1 sudo[115212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:50:15 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:15 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:50:15 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:15.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:50:15 compute-1 python3.9[115214]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:50:15 compute-1 sudo[115212]: pam_unix(sudo:session): session closed for user root
Dec 07 09:50:15 compute-1 ceph-mon[80077]: pgmap v233: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 07 09:50:15 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:15 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8c00001ff0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:15 compute-1 sudo[115364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbaoyvowwxtvfpftbrvpdeehkdgslect ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101015.515335-110-196774165953109/AnsiballZ_file.py'
Dec 07 09:50:15 compute-1 sudo[115364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:50:16 compute-1 python3.9[115366]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:50:16 compute-1 sudo[115364]: pam_unix(sudo:session): session closed for user root
Dec 07 09:50:16 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:16 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bec001e50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:16 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:16 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:50:16 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:16.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:50:16 compute-1 sudo[115517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwmkjkchsygaazffkdjffnwyjlegeety ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101016.3730583-157-260516848386686/AnsiballZ_stat.py'
Dec 07 09:50:16 compute-1 sudo[115517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:50:16 compute-1 python3.9[115519]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:50:16 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:16 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bd8004450 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:17 compute-1 sudo[115517]: pam_unix(sudo:session): session closed for user root
Dec 07 09:50:17 compute-1 sudo[115524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 09:50:17 compute-1 sudo[115524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:50:17 compute-1 sudo[115524]: pam_unix(sudo:session): session closed for user root
Dec 07 09:50:17 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:17 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000023s ======
Dec 07 09:50:17 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:17.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec 07 09:50:17 compute-1 ceph-mon[80077]: pgmap v234: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Dec 07 09:50:17 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:17 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bd8004450 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:17 compute-1 sudo[115665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwcnydvxconktndeuyjdhawlpxbzcrtu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101016.3730583-157-260516848386686/AnsiballZ_copy.py'
Dec 07 09:50:17 compute-1 sudo[115665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:50:17 compute-1 python3.9[115667]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101016.3730583-157-260516848386686/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=947b77cc97e4919b51a8546a4523af77f78b8680 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:50:17 compute-1 sudo[115665]: pam_unix(sudo:session): session closed for user root
Dec 07 09:50:18 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:18 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8c00001ff0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:18 compute-1 sudo[115818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlzkjatbrqgoojyavrmhgjhmlrvdsxvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101017.973608-157-230249804225188/AnsiballZ_stat.py'
Dec 07 09:50:18 compute-1 sudo[115818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:50:18 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:50:18 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:18 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000024s ======
Dec 07 09:50:18 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:18.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 07 09:50:18 compute-1 python3.9[115820]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:50:18 compute-1 sudo[115818]: pam_unix(sudo:session): session closed for user root
Dec 07 09:50:18 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:18 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bec001e50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:19 compute-1 sudo[115941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksylnordexmvbvudmnjbtkharbccxobf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101017.973608-157-230249804225188/AnsiballZ_copy.py'
Dec 07 09:50:19 compute-1 sudo[115941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:50:19 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:19 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000023s ======
Dec 07 09:50:19 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:19.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec 07 09:50:19 compute-1 python3.9[115943]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101017.973608-157-230249804225188/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=34e398db9ed1f36bf489ed136dc8e2f62ebc4eab backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:50:19 compute-1 sudo[115941]: pam_unix(sudo:session): session closed for user root
Dec 07 09:50:19 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:19 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bd8004450 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:19 compute-1 ceph-mon[80077]: pgmap v235: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Dec 07 09:50:19 compute-1 sudo[116093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsegetneywebngaqdcicekrstwrztgdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101019.529496-157-61624306328123/AnsiballZ_stat.py'
Dec 07 09:50:19 compute-1 sudo[116093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:50:19 compute-1 python3.9[116095]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:50:19 compute-1 sudo[116093]: pam_unix(sudo:session): session closed for user root
Dec 07 09:50:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:20 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bf4004c90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:20 compute-1 sudo[116217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqcsieizngdfwwiywvswmjblfnezpksn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101019.529496-157-61624306328123/AnsiballZ_copy.py'
Dec 07 09:50:20 compute-1 sudo[116217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:50:20 compute-1 python3.9[116219]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101019.529496-157-61624306328123/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=868a1e60c433fe6c14a9010628ca5dd49e2176f6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:50:20 compute-1 sudo[116217]: pam_unix(sudo:session): session closed for user root
Dec 07 09:50:20 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:20 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000024s ======
Dec 07 09:50:20 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:20.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 07 09:50:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:20 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8c00001ff0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:21 compute-1 sudo[116369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzclnxwbkokdhfdfhyzrhclylunflbok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101020.8806083-285-94537552318805/AnsiballZ_file.py'
Dec 07 09:50:21 compute-1 sudo[116369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:50:21 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:21 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000023s ======
Dec 07 09:50:21 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:21.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec 07 09:50:21 compute-1 python3.9[116371]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:50:21 compute-1 sudo[116369]: pam_unix(sudo:session): session closed for user root
Dec 07 09:50:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:21 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bec001e50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:21 compute-1 ceph-mon[80077]: pgmap v236: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
Dec 07 09:50:21 compute-1 sudo[116521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yiscbhtmxpfxgediljhiingimbriripd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101021.5934503-285-6804378430936/AnsiballZ_file.py'
Dec 07 09:50:21 compute-1 sudo[116521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:50:22 compute-1 python3.9[116523]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:50:22 compute-1 sudo[116521]: pam_unix(sudo:session): session closed for user root
Dec 07 09:50:22 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:22 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bd8004450 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:22 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:22 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:50:22 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:22.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:50:22 compute-1 sudo[116674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycqciqdedgylmqluctisugwympmkzwfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101022.3682163-334-2245698911926/AnsiballZ_stat.py'
Dec 07 09:50:22 compute-1 sudo[116674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:50:22 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:22 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bf4004cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:23 compute-1 python3.9[116676]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:50:23 compute-1 sudo[116674]: pam_unix(sudo:session): session closed for user root
Dec 07 09:50:23 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:23 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:50:23 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:23.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:50:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:23 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bf4004cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:23 compute-1 sudo[116797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuzqifwzfphdxawqlpiinbdahnxlomgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101022.3682163-334-2245698911926/AnsiballZ_copy.py'
Dec 07 09:50:23 compute-1 sudo[116797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:50:23 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:50:23 compute-1 sudo[116800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 09:50:23 compute-1 sudo[116800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:50:23 compute-1 sudo[116800]: pam_unix(sudo:session): session closed for user root
Dec 07 09:50:23 compute-1 python3.9[116799]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101022.3682163-334-2245698911926/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=06daff698ebbf7da4c3e8acf751249186116622b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:50:23 compute-1 sudo[116825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Dec 07 09:50:23 compute-1 sudo[116825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:50:23 compute-1 ceph-mon[80077]: pgmap v237: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 07 09:50:23 compute-1 sudo[116797]: pam_unix(sudo:session): session closed for user root
Dec 07 09:50:24 compute-1 sudo[116825]: pam_unix(sudo:session): session closed for user root
Dec 07 09:50:24 compute-1 sudo[117021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skubjjxxmjxhyeufweayleczvissqlij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101023.8742702-334-141007852409821/AnsiballZ_stat.py'
Dec 07 09:50:24 compute-1 sudo[117021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:50:24 compute-1 sudo[117020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 09:50:24 compute-1 sudo[117020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:50:24 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:24 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bec004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:24 compute-1 sudo[117020]: pam_unix(sudo:session): session closed for user root
Dec 07 09:50:24 compute-1 sudo[117048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 07 09:50:24 compute-1 sudo[117048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:50:24 compute-1 python3.9[117035]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:50:24 compute-1 sudo[117021]: pam_unix(sudo:session): session closed for user root
Dec 07 09:50:24 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:24 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000024s ======
Dec 07 09:50:24 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:24.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 07 09:50:24 compute-1 sudo[117214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trayqsrzcgjxgckkobgstehebcatdjmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101023.8742702-334-141007852409821/AnsiballZ_copy.py'
Dec 07 09:50:24 compute-1 sudo[117214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:50:24 compute-1 sudo[117048]: pam_unix(sudo:session): session closed for user root
Dec 07 09:50:24 compute-1 python3.9[117221]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101023.8742702-334-141007852409821/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=7f7d1cd622d2240bbe15befe04459424cf20396a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:50:24 compute-1 sudo[117214]: pam_unix(sudo:session): session closed for user root
Dec 07 09:50:24 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:24 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bd8004450 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:25 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:50:25 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:50:25 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:50:25 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:50:25 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:50:25 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 07 09:50:25 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:50:25 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:50:25 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 07 09:50:25 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 07 09:50:25 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:50:25 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:25 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:50:25 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:25.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:50:25 compute-1 sudo[117376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yevkggtzgnamwqidetbpgqhztvmskueh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101025.0347204-334-100215016515729/AnsiballZ_stat.py'
Dec 07 09:50:25 compute-1 sudo[117376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:50:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:25 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8c00004840 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:25 compute-1 python3.9[117378]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:50:25 compute-1 sudo[117376]: pam_unix(sudo:session): session closed for user root
Dec 07 09:50:25 compute-1 sudo[117499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcufqmlczuljziiwclcgnqndynxcycbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101025.0347204-334-100215016515729/AnsiballZ_copy.py'
Dec 07 09:50:25 compute-1 sudo[117499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:50:26 compute-1 python3.9[117501]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101025.0347204-334-100215016515729/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=a4ffcd68bf7c6aee3efe03b7bee99f9a55e5dd8f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:50:26 compute-1 sudo[117499]: pam_unix(sudo:session): session closed for user root
Dec 07 09:50:26 compute-1 ceph-mon[80077]: pgmap v238: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 07 09:50:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:26 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bf4004cd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:26 compute-1 sudo[117652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvucpsnzhlbjzccvdrtyjpbplpilksxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101026.3263648-467-121405482199636/AnsiballZ_file.py'
Dec 07 09:50:26 compute-1 sudo[117652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:50:26 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:26 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:50:26 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:26.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:50:26 compute-1 python3.9[117654]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:50:26 compute-1 sudo[117652]: pam_unix(sudo:session): session closed for user root
Dec 07 09:50:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:26 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bec004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:27 compute-1 sudo[117804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epldmxghhuhfbpxopyptieqtdzjmljbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101027.021847-467-29523796285620/AnsiballZ_file.py'
Dec 07 09:50:27 compute-1 sudo[117804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:50:27 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:27 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:50:27 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:27.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:50:27 compute-1 python3.9[117806]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:50:27 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:27 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bd8004450 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:27 compute-1 sudo[117804]: pam_unix(sudo:session): session closed for user root
Dec 07 09:50:27 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/095027 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 07 09:50:27 compute-1 sudo[117956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdekszsbxqsrpzedeajibccruihupzkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101027.6952074-512-134521216846944/AnsiballZ_stat.py'
Dec 07 09:50:27 compute-1 sudo[117956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:50:28 compute-1 ceph-mon[80077]: pgmap v239: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:50:28 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:50:28 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:28 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8c00004840 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:28 compute-1 python3.9[117958]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:50:28 compute-1 sudo[117956]: pam_unix(sudo:session): session closed for user root
Dec 07 09:50:28 compute-1 sudo[118080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktlpipzzezsjwqqfeianxwfdyecxyzut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101027.6952074-512-134521216846944/AnsiballZ_copy.py'
Dec 07 09:50:28 compute-1 sudo[118080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:50:28 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:50:28 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:28 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:50:28 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:28.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:50:28 compute-1 python3.9[118082]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101027.6952074-512-134521216846944/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=5e6d1843735f38090f3d639c50e6451a3705cf01 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:50:28 compute-1 sudo[118080]: pam_unix(sudo:session): session closed for user root
Dec 07 09:50:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:28 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bf4004cf0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:29 compute-1 sudo[118232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyzpzacmfnpyfrgoesloswyovkupfeak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101028.9581237-512-140445039690876/AnsiballZ_stat.py'
Dec 07 09:50:29 compute-1 sudo[118232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:50:29 compute-1 sudo[118235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 07 09:50:29 compute-1 sudo[118235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:50:29 compute-1 sudo[118235]: pam_unix(sudo:session): session closed for user root
Dec 07 09:50:29 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:29 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000024s ======
Dec 07 09:50:29 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:29.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 07 09:50:29 compute-1 python3.9[118234]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:50:29 compute-1 sudo[118232]: pam_unix(sudo:session): session closed for user root
Dec 07 09:50:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:29 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bec004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:29 compute-1 sudo[118380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxealysdjzlmjksbcnsebafemjgjqfla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101028.9581237-512-140445039690876/AnsiballZ_copy.py'
Dec 07 09:50:29 compute-1 sudo[118380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:50:30 compute-1 python3.9[118382]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101028.9581237-512-140445039690876/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=7f7d1cd622d2240bbe15befe04459424cf20396a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:50:30 compute-1 sudo[118380]: pam_unix(sudo:session): session closed for user root
Dec 07 09:50:30 compute-1 ceph-mon[80077]: pgmap v240: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:50:30 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:50:30 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:50:30 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:30 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bd8004450 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:30 compute-1 sudo[118533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuvfnlfuatyvgxerpbdwsnaljqhntdea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101030.1896508-512-62936645189068/AnsiballZ_stat.py'
Dec 07 09:50:30 compute-1 sudo[118533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:50:30 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:30 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000023s ======
Dec 07 09:50:30 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:30.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec 07 09:50:30 compute-1 python3.9[118535]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:50:30 compute-1 sudo[118533]: pam_unix(sudo:session): session closed for user root
Dec 07 09:50:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:30 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bd8004450 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:31 compute-1 sudo[118656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ceztvaherqejkarcofdsmiodxftakcfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101030.1896508-512-62936645189068/AnsiballZ_copy.py'
Dec 07 09:50:31 compute-1 sudo[118656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:50:31 compute-1 python3.9[118658]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101030.1896508-512-62936645189068/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=cdc2ec88d8bae19085451cae035f316b8903db7f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:50:31 compute-1 sudo[118656]: pam_unix(sudo:session): session closed for user root
Dec 07 09:50:31 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:31 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000024s ======
Dec 07 09:50:31 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:31.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 07 09:50:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:31 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bf4004d10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:32 compute-1 ceph-mon[80077]: pgmap v241: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 07 09:50:32 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:32 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bec004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:32 compute-1 sudo[118809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fchbyvnxxeyvkzbuqbktcptexfhpwrmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101031.9737792-686-257939762269374/AnsiballZ_file.py'
Dec 07 09:50:32 compute-1 sudo[118809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:50:32 compute-1 python3.9[118811]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:50:32 compute-1 sudo[118809]: pam_unix(sudo:session): session closed for user root
Dec 07 09:50:32 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:32 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000024s ======
Dec 07 09:50:32 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:32.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 07 09:50:32 compute-1 sudo[118961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ictupycdchaznsyctjakdwumblgxtwso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101032.6905503-716-238926792047628/AnsiballZ_stat.py'
Dec 07 09:50:32 compute-1 sudo[118961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:50:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:33 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8c00004840 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:33 compute-1 python3.9[118963]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:50:33 compute-1 sudo[118961]: pam_unix(sudo:session): session closed for user root
Dec 07 09:50:33 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:33 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:50:33 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:33.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:50:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:33 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bd8004450 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:33 compute-1 sudo[119084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpoiouaddtnruypxoenlbbwdufodixcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101032.6905503-716-238926792047628/AnsiballZ_copy.py'
Dec 07 09:50:33 compute-1 sudo[119084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:50:33 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:50:33 compute-1 python3.9[119086]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101032.6905503-716-238926792047628/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=04e3974ae626deea30737932cd4a2d2f473c7179 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:50:33 compute-1 sudo[119084]: pam_unix(sudo:session): session closed for user root
Dec 07 09:50:34 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:34 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bf4004d30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:34 compute-1 ceph-mon[80077]: pgmap v242: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 07 09:50:34 compute-1 sudo[119237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muqvjgeitzqudjcdrrlgiouetipkohva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101034.0040295-771-242934486871551/AnsiballZ_file.py'
Dec 07 09:50:34 compute-1 sudo[119237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:50:34 compute-1 python3.9[119239]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:50:34 compute-1 sudo[119237]: pam_unix(sudo:session): session closed for user root
Dec 07 09:50:34 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:34 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000023s ======
Dec 07 09:50:34 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:34.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec 07 09:50:34 compute-1 sudo[119389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ongmdyaobrhtbzgojciqpuxutsbdkybl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101034.6588075-795-67551665347171/AnsiballZ_stat.py'
Dec 07 09:50:34 compute-1 sudo[119389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:50:35 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:35 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bec004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:35 compute-1 python3.9[119391]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:50:35 compute-1 sudo[119389]: pam_unix(sudo:session): session closed for user root
Dec 07 09:50:35 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:35 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000024s ======
Dec 07 09:50:35 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:35.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 07 09:50:35 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:35 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8c00004840 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:35 compute-1 sudo[119512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuxxllhabbqiatlxwgyjxkmcfidfqbdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101034.6588075-795-67551665347171/AnsiballZ_copy.py'
Dec 07 09:50:35 compute-1 sudo[119512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:50:35 compute-1 python3.9[119514]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101034.6588075-795-67551665347171/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=04e3974ae626deea30737932cd4a2d2f473c7179 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:50:35 compute-1 sudo[119512]: pam_unix(sudo:session): session closed for user root
Dec 07 09:50:36 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:36 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bd8004450 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:36 compute-1 ceph-mon[80077]: pgmap v243: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 07 09:50:36 compute-1 sudo[119666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyhwjtqrscqregkhyhrnwkjurstehles ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101036.029442-842-73606039505685/AnsiballZ_file.py'
Dec 07 09:50:36 compute-1 sudo[119666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:50:36 compute-1 python3.9[119668]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:50:36 compute-1 sudo[119666]: pam_unix(sudo:session): session closed for user root
Dec 07 09:50:36 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:36 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000024s ======
Dec 07 09:50:36 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:36.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 07 09:50:36 compute-1 sudo[119818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptuhrcyyxeenyeahszyuzkzbwhrugwsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101036.7042408-868-100958191746052/AnsiballZ_stat.py'
Dec 07 09:50:36 compute-1 sudo[119818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:50:37 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:37 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bf4004d50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:37 compute-1 sudo[119822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 09:50:37 compute-1 sudo[119822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:50:37 compute-1 sudo[119822]: pam_unix(sudo:session): session closed for user root
Dec 07 09:50:37 compute-1 python3.9[119820]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:50:37 compute-1 sudo[119818]: pam_unix(sudo:session): session closed for user root
Dec 07 09:50:37 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:37 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:50:37 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:37.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:50:37 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:37 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 07 09:50:37 compute-1 ceph-mon[80077]: pgmap v244: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 07 09:50:37 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:37 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8be0002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:37 compute-1 sudo[119967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvuifvgsrbfgdktgfgcjoxktknwxelpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101036.7042408-868-100958191746052/AnsiballZ_copy.py'
Dec 07 09:50:37 compute-1 sudo[119967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:50:37 compute-1 python3.9[119969]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101036.7042408-868-100958191746052/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=04e3974ae626deea30737932cd4a2d2f473c7179 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:50:37 compute-1 sudo[119967]: pam_unix(sudo:session): session closed for user root
Dec 07 09:50:38 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:38 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bec004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:38 compute-1 sudo[120120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfonnzjdkwmebgedsxkhadzrcptuaqqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101037.9544535-911-127403349789085/AnsiballZ_file.py'
Dec 07 09:50:38 compute-1 sudo[120120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:50:38 compute-1 python3.9[120122]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:50:38 compute-1 sudo[120120]: pam_unix(sudo:session): session closed for user root
Dec 07 09:50:38 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:50:38 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:38 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:50:38 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:38.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:50:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:39 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bdc0014f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:39 compute-1 sudo[120272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhncnkrfefizyytgnyxbtadkzrnlkwkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101038.732727-938-166278894854252/AnsiballZ_stat.py'
Dec 07 09:50:39 compute-1 sudo[120272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:50:39 compute-1 python3.9[120274]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:50:39 compute-1 sudo[120272]: pam_unix(sudo:session): session closed for user root
Dec 07 09:50:39 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:39 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:50:39 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:39.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:50:39 compute-1 ceph-mon[80077]: pgmap v245: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 07 09:50:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:39 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bf4004d50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:39 compute-1 sudo[120395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvjggnegepwapnbczhrssllxztdstbnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101038.732727-938-166278894854252/AnsiballZ_copy.py'
Dec 07 09:50:39 compute-1 sudo[120395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:50:39 compute-1 python3.9[120397]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101038.732727-938-166278894854252/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=04e3974ae626deea30737932cd4a2d2f473c7179 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:50:39 compute-1 sudo[120395]: pam_unix(sudo:session): session closed for user root
Dec 07 09:50:40 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:40 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8be0002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:40 compute-1 sudo[120548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbztcyeftwbnapjsagorjwuoqmsgngmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101040.0453825-989-55358895412150/AnsiballZ_file.py'
Dec 07 09:50:40 compute-1 sudo[120548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:50:40 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:40 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 07 09:50:40 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:40 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 07 09:50:40 compute-1 python3.9[120550]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:50:40 compute-1 sudo[120548]: pam_unix(sudo:session): session closed for user root
Dec 07 09:50:40 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:40 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000024s ======
Dec 07 09:50:40 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:40.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 07 09:50:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:41 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bec004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:41 compute-1 sudo[120700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkuilzszidlbvepgagyqwuoabzqttvej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101040.7574275-1014-171732710498357/AnsiballZ_stat.py'
Dec 07 09:50:41 compute-1 sudo[120700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:50:41 compute-1 python3.9[120702]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:50:41 compute-1 sudo[120700]: pam_unix(sudo:session): session closed for user root
Dec 07 09:50:41 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:41 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:50:41 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:41.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:50:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:41 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bdc0014f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:41 compute-1 ceph-mon[80077]: pgmap v246: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Dec 07 09:50:41 compute-1 sudo[120823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqrqntyomyujgeoolsjbasklegoabobh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101040.7574275-1014-171732710498357/AnsiballZ_copy.py'
Dec 07 09:50:41 compute-1 sudo[120823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:50:41 compute-1 python3.9[120825]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101040.7574275-1014-171732710498357/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=04e3974ae626deea30737932cd4a2d2f473c7179 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:50:41 compute-1 sudo[120823]: pam_unix(sudo:session): session closed for user root
Dec 07 09:50:42 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:42 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bf4004d50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:42 compute-1 sudo[120976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wetrtyahbwaeskgjmsduhfjtpishqvom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101042.0594947-1054-101897499688823/AnsiballZ_file.py'
Dec 07 09:50:42 compute-1 sudo[120976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:50:42 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:50:42 compute-1 python3.9[120978]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:50:42 compute-1 sudo[120976]: pam_unix(sudo:session): session closed for user root
Dec 07 09:50:42 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:42 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:50:42 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:42.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:50:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:43 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8be0001f90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:43 compute-1 sudo[121128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyarvmtrpwbbiegnaomwkyrpadtrqpmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101042.790138-1070-238947524522895/AnsiballZ_stat.py'
Dec 07 09:50:43 compute-1 sudo[121128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:50:43 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:43 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:50:43 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:43.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:50:43 compute-1 python3.9[121130]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:50:43 compute-1 sudo[121128]: pam_unix(sudo:session): session closed for user root
Dec 07 09:50:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:43 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 07 09:50:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:43 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bec004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:43 compute-1 ceph-mon[80077]: pgmap v247: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec 07 09:50:43 compute-1 ceph-mon[80077]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #25. Immutable memtables: 0.
Dec 07 09:50:43 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:50:43.574680) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 07 09:50:43 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 25
Dec 07 09:50:43 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101043574722, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 836, "num_deletes": 251, "total_data_size": 1799674, "memory_usage": 1830848, "flush_reason": "Manual Compaction"}
Dec 07 09:50:43 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #26: started
Dec 07 09:50:43 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101043586153, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 26, "file_size": 1169750, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12651, "largest_seqno": 13482, "table_properties": {"data_size": 1165861, "index_size": 1669, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8633, "raw_average_key_size": 19, "raw_value_size": 1158030, "raw_average_value_size": 2550, "num_data_blocks": 73, "num_entries": 454, "num_filter_entries": 454, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100988, "oldest_key_time": 1765100988, "file_creation_time": 1765101043, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "19b7cd17-892b-4642-8771-311739802c4a", "db_session_id": "JE48258Z8V09XG5T5TD7", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Dec 07 09:50:43 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 11550 microseconds, and 4359 cpu microseconds.
Dec 07 09:50:43 compute-1 ceph-mon[80077]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 07 09:50:43 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:50:43.586226) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #26: 1169750 bytes OK
Dec 07 09:50:43 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:50:43.586256) [db/memtable_list.cc:519] [default] Level-0 commit table #26 started
Dec 07 09:50:43 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:50:43.588060) [db/memtable_list.cc:722] [default] Level-0 commit table #26: memtable #1 done
Dec 07 09:50:43 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:50:43.588080) EVENT_LOG_v1 {"time_micros": 1765101043588073, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 07 09:50:43 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:50:43.588104) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 07 09:50:43 compute-1 ceph-mon[80077]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 1795383, prev total WAL file size 1795383, number of live WAL files 2.
Dec 07 09:50:43 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000022.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 09:50:43 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:50:43.591792) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Dec 07 09:50:43 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 07 09:50:43 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [26(1142KB)], [24(13MB)]
Dec 07 09:50:43 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101043591873, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [26], "files_L6": [24], "score": -1, "input_data_size": 15260564, "oldest_snapshot_seqno": -1}
Dec 07 09:50:43 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:50:43 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #27: 4354 keys, 13471226 bytes, temperature: kUnknown
Dec 07 09:50:43 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101043760224, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 27, "file_size": 13471226, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13439138, "index_size": 20074, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10949, "raw_key_size": 111491, "raw_average_key_size": 25, "raw_value_size": 13356627, "raw_average_value_size": 3067, "num_data_blocks": 849, "num_entries": 4354, "num_filter_entries": 4354, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100473, "oldest_key_time": 0, "file_creation_time": 1765101043, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "19b7cd17-892b-4642-8771-311739802c4a", "db_session_id": "JE48258Z8V09XG5T5TD7", "orig_file_number": 27, "seqno_to_time_mapping": "N/A"}}
Dec 07 09:50:43 compute-1 ceph-mon[80077]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 07 09:50:43 compute-1 sudo[121251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgmtcxbnsmhwyignodzoalizcsiyazpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101042.790138-1070-238947524522895/AnsiballZ_copy.py'
Dec 07 09:50:43 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:50:43.760609) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 13471226 bytes
Dec 07 09:50:43 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:50:43.762716) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 90.6 rd, 80.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 13.4 +0.0 blob) out(12.8 +0.0 blob), read-write-amplify(24.6) write-amplify(11.5) OK, records in: 4872, records dropped: 518 output_compression: NoCompression
Dec 07 09:50:43 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:50:43.762739) EVENT_LOG_v1 {"time_micros": 1765101043762728, "job": 12, "event": "compaction_finished", "compaction_time_micros": 168434, "compaction_time_cpu_micros": 31297, "output_level": 6, "num_output_files": 1, "total_output_size": 13471226, "num_input_records": 4872, "num_output_records": 4354, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 07 09:50:43 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 09:50:43 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101043763429, "job": 12, "event": "table_file_deletion", "file_number": 26}
Dec 07 09:50:43 compute-1 sudo[121251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:50:43 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000024.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 09:50:43 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101043767642, "job": 12, "event": "table_file_deletion", "file_number": 24}
Dec 07 09:50:43 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:50:43.591659) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 09:50:43 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:50:43.767836) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 09:50:43 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:50:43.767846) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 09:50:43 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:50:43.767848) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 09:50:43 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:50:43.767850) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 09:50:43 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:50:43.767852) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 09:50:43 compute-1 python3.9[121253]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101042.790138-1070-238947524522895/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=04e3974ae626deea30737932cd4a2d2f473c7179 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:50:43 compute-1 sudo[121251]: pam_unix(sudo:session): session closed for user root
Dec 07 09:50:44 compute-1 ceph-osd[77581]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 07 09:50:44 compute-1 ceph-osd[77581]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 8331 writes, 34K keys, 8331 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s
                                           Cumulative WAL: 8331 writes, 1678 syncs, 4.96 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8331 writes, 34K keys, 8331 commit groups, 1.0 writes per commit group, ingest: 21.61 MB, 0.04 MB/s
                                           Interval WAL: 8331 writes, 1678 syncs, 4.96 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b19350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b19350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b19350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b19350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b19350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b19350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b19350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b189b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b189b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b189b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b19350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b19350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 07 09:50:44 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:44 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bdc0014f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:44 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:44 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:50:44 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:44.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:50:45 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:45 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bf4004d50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:45 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:45 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:50:45 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:45.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:50:45 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:45 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8be0001f90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:45 compute-1 ceph-mon[80077]: pgmap v248: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 07 09:50:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:46 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bec004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:46 compute-1 sshd-session[114907]: Connection closed by 192.168.122.30 port 45096
Dec 07 09:50:46 compute-1 sshd-session[114904]: pam_unix(sshd:session): session closed for user zuul
Dec 07 09:50:46 compute-1 systemd[1]: session-47.scope: Deactivated successfully.
Dec 07 09:50:46 compute-1 systemd[1]: session-47.scope: Consumed 24.331s CPU time.
Dec 07 09:50:46 compute-1 systemd-logind[796]: Session 47 logged out. Waiting for processes to exit.
Dec 07 09:50:46 compute-1 systemd-logind[796]: Removed session 47.
Dec 07 09:50:46 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:46 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:50:46 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:46.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:50:47 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:47 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8be0001f90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:47 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:47 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000024s ======
Dec 07 09:50:47 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:47.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 07 09:50:47 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:47 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bf4004d50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:47 compute-1 ceph-mon[80077]: pgmap v249: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 925 B/s wr, 3 op/s
Dec 07 09:50:48 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:48 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bdc0014f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:50:48 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:50:48 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:48 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:50:48 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:48.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:50:49 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[98900]: 07/12/2025 09:50:49 : epoch 69354d35 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8bec004250 fd 39 proxy ignored for local
Dec 07 09:50:49 compute-1 kernel: ganesha.nfsd[114651]: segfault at 50 ip 00007f8cb217632e sp 00007f8c6a7fb210 error 4 in libntirpc.so.5.8[7f8cb215b000+2c000] likely on CPU 0 (core 0, socket 0)
Dec 07 09:50:49 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec 07 09:50:49 compute-1 systemd[1]: Started Process Core Dump (PID 121281/UID 0).
Dec 07 09:50:49 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:49 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:50:49 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:49.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:50:49 compute-1 ceph-mon[80077]: pgmap v250: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 925 B/s wr, 3 op/s
Dec 07 09:50:49 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/095049 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 07 09:50:50 compute-1 systemd-coredump[121282]: Process 98904 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 63:
                                                    #0  0x00007f8cb217632e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Dec 07 09:50:50 compute-1 systemd[1]: systemd-coredump@1-121281-0.service: Deactivated successfully.
Dec 07 09:50:50 compute-1 systemd[1]: systemd-coredump@1-121281-0.service: Consumed 1.250s CPU time.
Dec 07 09:50:50 compute-1 podman[121288]: 2025-12-07 09:50:50.454406565 +0000 UTC m=+0.030108617 container died b20fbabdf5d86f1705daf3a3804c9c9ec4924e08e9ec968d8a9e839e86081a83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec 07 09:50:50 compute-1 systemd[1]: var-lib-containers-storage-overlay-93f4b3f80c333a59efc994b43badc3d571eb467c4b8f45d7807b8e8067db1d7a-merged.mount: Deactivated successfully.
Dec 07 09:50:50 compute-1 podman[121288]: 2025-12-07 09:50:50.496448213 +0000 UTC m=+0.072150235 container remove b20fbabdf5d86f1705daf3a3804c9c9ec4924e08e9ec968d8a9e839e86081a83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec 07 09:50:50 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Main process exited, code=exited, status=139/n/a
Dec 07 09:50:50 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Failed with result 'exit-code'.
Dec 07 09:50:50 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Consumed 1.869s CPU time.
Dec 07 09:50:50 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:50 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:50:50 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:50.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:50:51 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:51 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000023s ======
Dec 07 09:50:51 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:51.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec 07 09:50:51 compute-1 ceph-mon[80077]: pgmap v251: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 925 B/s wr, 3 op/s
Dec 07 09:50:51 compute-1 sshd-session[121332]: Accepted publickey for zuul from 192.168.122.30 port 35940 ssh2: ECDSA SHA256:Ge4vEI3tJyOAEV3kv3WUGTzBV/uoL+dgWZHkPhbKL64
Dec 07 09:50:51 compute-1 systemd-logind[796]: New session 48 of user zuul.
Dec 07 09:50:51 compute-1 systemd[1]: Started Session 48 of User zuul.
Dec 07 09:50:51 compute-1 sshd-session[121332]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 07 09:50:52 compute-1 sudo[121486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pscezyiypxvthoibtuyompvdizkuihun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101052.0342915-27-267575889356969/AnsiballZ_file.py'
Dec 07 09:50:52 compute-1 sudo[121486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:50:52 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:52 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:50:52 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:52.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:50:52 compute-1 python3.9[121488]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:50:52 compute-1 sudo[121486]: pam_unix(sudo:session): session closed for user root
Dec 07 09:50:53 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:53 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:50:53 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:53.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:50:53 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:50:53 compute-1 sudo[121638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxsuumqqpojljrtsitlcddvdztwmicff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101053.0817597-63-79769884799869/AnsiballZ_stat.py'
Dec 07 09:50:53 compute-1 sudo[121638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:50:53 compute-1 python3.9[121640]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:50:53 compute-1 sudo[121638]: pam_unix(sudo:session): session closed for user root
Dec 07 09:50:54 compute-1 ceph-mon[80077]: pgmap v252: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 420 B/s wr, 1 op/s
Dec 07 09:50:54 compute-1 sudo[121762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqxlriqigvcpdcnuotphfpltrmzpefet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101053.0817597-63-79769884799869/AnsiballZ_copy.py'
Dec 07 09:50:54 compute-1 sudo[121762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:50:54 compute-1 python3.9[121764]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765101053.0817597-63-79769884799869/.source.conf _original_basename=ceph.conf follow=False checksum=af72f8d2b9ff82597d6797e3be25005bcbb0448d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:50:54 compute-1 sudo[121762]: pam_unix(sudo:session): session closed for user root
Dec 07 09:50:54 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:54 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:50:54 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:54.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:50:54 compute-1 sudo[121914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjqvvlxawtiqbcnxypvhyxfrvsmkvndb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101054.6940641-63-60869587265668/AnsiballZ_stat.py'
Dec 07 09:50:54 compute-1 sudo[121914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:50:55 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/095055 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 07 09:50:55 compute-1 python3.9[121916]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:50:55 compute-1 sudo[121914]: pam_unix(sudo:session): session closed for user root
Dec 07 09:50:55 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:55 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000023s ======
Dec 07 09:50:55 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:55.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec 07 09:50:55 compute-1 ceph-mon[80077]: pgmap v253: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 420 B/s wr, 2 op/s
Dec 07 09:50:55 compute-1 sudo[122037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebnczyjbfozvlxikdnukyalkizgkwetn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101054.6940641-63-60869587265668/AnsiballZ_copy.py'
Dec 07 09:50:55 compute-1 sudo[122037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:50:55 compute-1 python3.9[122039]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765101054.6940641-63-60869587265668/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=2eec074211d5644630d1561f0b2053eaf094bdc2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:50:55 compute-1 sudo[122037]: pam_unix(sudo:session): session closed for user root
Dec 07 09:50:56 compute-1 sshd-session[121335]: Connection closed by 192.168.122.30 port 35940
Dec 07 09:50:56 compute-1 sshd-session[121332]: pam_unix(sshd:session): session closed for user zuul
Dec 07 09:50:56 compute-1 systemd[1]: session-48.scope: Deactivated successfully.
Dec 07 09:50:56 compute-1 systemd[1]: session-48.scope: Consumed 2.854s CPU time.
Dec 07 09:50:56 compute-1 systemd-logind[796]: Session 48 logged out. Waiting for processes to exit.
Dec 07 09:50:56 compute-1 systemd-logind[796]: Removed session 48.
Dec 07 09:50:56 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:56 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:50:56 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:56.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:50:57 compute-1 sudo[122065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 09:50:57 compute-1 sudo[122065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:50:57 compute-1 sudo[122065]: pam_unix(sudo:session): session closed for user root
Dec 07 09:50:57 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:57 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:50:57 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:57.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:50:57 compute-1 ceph-mon[80077]: pgmap v254: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 168 B/s rd, 0 B/s wr, 0 op/s
Dec 07 09:50:57 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:50:58 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:50:58 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:58 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:50:58 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:50:58.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:50:59 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:50:59 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:50:59 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:50:59.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:51:00 compute-1 ceph-mon[80077]: pgmap v255: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Dec 07 09:51:00 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:00 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:51:00 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:00.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:51:00 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Scheduled restart job, restart counter is at 2.
Dec 07 09:51:00 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.jddrlu for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec 07 09:51:00 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Consumed 1.869s CPU time.
Dec 07 09:51:00 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.jddrlu for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c...
Dec 07 09:51:00 compute-1 podman[122138]: 2025-12-07 09:51:00.961281151 +0000 UTC m=+0.041694606 container create 53a7ed53da57fa4b431c918cb8dbcdc8378be7a2d2553973f58796e64b8ee5bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 07 09:51:01 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bfbe1841630305909283f95b5e300cd0f5f1fd7c498a548e71a02c827061dd1/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec 07 09:51:01 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bfbe1841630305909283f95b5e300cd0f5f1fd7c498a548e71a02c827061dd1/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec 07 09:51:01 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bfbe1841630305909283f95b5e300cd0f5f1fd7c498a548e71a02c827061dd1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 07 09:51:01 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bfbe1841630305909283f95b5e300cd0f5f1fd7c498a548e71a02c827061dd1/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.jddrlu-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec 07 09:51:01 compute-1 podman[122138]: 2025-12-07 09:51:01.024436479 +0000 UTC m=+0.104849944 container init 53a7ed53da57fa4b431c918cb8dbcdc8378be7a2d2553973f58796e64b8ee5bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 07 09:51:01 compute-1 podman[122138]: 2025-12-07 09:51:01.030646209 +0000 UTC m=+0.111059664 container start 53a7ed53da57fa4b431c918cb8dbcdc8378be7a2d2553973f58796e64b8ee5bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 07 09:51:01 compute-1 bash[122138]: 53a7ed53da57fa4b431c918cb8dbcdc8378be7a2d2553973f58796e64b8ee5bc
Dec 07 09:51:01 compute-1 podman[122138]: 2025-12-07 09:51:00.942439828 +0000 UTC m=+0.022853303 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 07 09:51:01 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:01 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec 07 09:51:01 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:01 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec 07 09:51:01 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.jddrlu for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec 07 09:51:01 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:01 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec 07 09:51:01 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:01 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec 07 09:51:01 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:01 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec 07 09:51:01 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:01 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec 07 09:51:01 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:01 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec 07 09:51:01 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:01 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 07 09:51:01 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:01 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:51:01 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:01.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:51:02 compute-1 sshd-session[122196]: Accepted publickey for zuul from 192.168.122.30 port 52942 ssh2: ECDSA SHA256:Ge4vEI3tJyOAEV3kv3WUGTzBV/uoL+dgWZHkPhbKL64
Dec 07 09:51:02 compute-1 ceph-mon[80077]: pgmap v256: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 07 09:51:02 compute-1 systemd-logind[796]: New session 49 of user zuul.
Dec 07 09:51:02 compute-1 systemd[1]: Started Session 49 of User zuul.
Dec 07 09:51:02 compute-1 sshd-session[122196]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 07 09:51:02 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:02 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:51:02 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:02.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:51:03 compute-1 python3.9[122350]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 07 09:51:03 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:03 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:51:03 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:03.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:51:03 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:51:04 compute-1 sudo[122505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syfgxzhgcllargazntssztoiqwpfzlqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101063.7276437-63-265219220889034/AnsiballZ_file.py'
Dec 07 09:51:04 compute-1 sudo[122505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:51:04 compute-1 ceph-mon[80077]: pgmap v257: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 07 09:51:04 compute-1 python3.9[122507]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:51:04 compute-1 sudo[122505]: pam_unix(sudo:session): session closed for user root
Dec 07 09:51:04 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:04 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:51:04 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:04.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:51:04 compute-1 sudo[122657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qaasnlgkzxpppgothgilzmtzhldtaqmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101064.5069594-63-280205620227151/AnsiballZ_file.py'
Dec 07 09:51:04 compute-1 sudo[122657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:51:05 compute-1 python3.9[122659]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:51:05 compute-1 sudo[122657]: pam_unix(sudo:session): session closed for user root
Dec 07 09:51:05 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:05 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:51:05 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:05.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:51:05 compute-1 python3.9[122809]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 07 09:51:06 compute-1 sudo[122960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gepvdcqfjxohszxcykknwaixnwgeonlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101066.1798568-132-63899002828230/AnsiballZ_seboolean.py'
Dec 07 09:51:06 compute-1 sudo[122960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:51:06 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:06 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:51:06 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:06.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:51:06 compute-1 python3.9[122962]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec 07 09:51:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:07 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 07 09:51:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:07 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 07 09:51:07 compute-1 ceph-mon[80077]: pgmap v258: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Dec 07 09:51:07 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:07 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:51:07 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:07.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:51:08 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:08 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:51:08 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:08.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:51:08 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:51:09 compute-1 ceph-mon[80077]: pgmap v259: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Dec 07 09:51:09 compute-1 sudo[122960]: pam_unix(sudo:session): session closed for user root
Dec 07 09:51:09 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:09 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:51:09 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:09.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:51:10 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:10 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:51:10 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:10.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:51:11 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:11 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:51:11 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:11.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:51:11 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/095111 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 07 09:51:12 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:12 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:51:12 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:12.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:51:12 compute-1 ceph-mon[80077]: pgmap v260: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Dec 07 09:51:13 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:13 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:51:13 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:13.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:51:13 compute-1 ceph-mon[80077]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 07 09:51:13 compute-1 ceph-mon[80077]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 2268 writes, 13K keys, 2268 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.06 MB/s
                                           Cumulative WAL: 2268 writes, 2268 syncs, 1.00 writes per sync, written: 0.04 GB, 0.06 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2268 writes, 13K keys, 2268 commit groups, 1.0 writes per commit group, ingest: 36.50 MB, 0.06 MB/s
                                           Interval WAL: 2268 writes, 2268 syncs, 1.00 writes per sync, written: 0.04 GB, 0.06 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     73.3      0.30              0.08         6    0.050       0      0       0.0       0.0
                                             L6      1/0   12.85 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   2.9    109.3     95.5      0.65              0.20         5    0.131     21K   2283       0.0       0.0
                                            Sum      1/0   12.85 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.9     75.1     88.5      0.95              0.29        11    0.087     21K   2283       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.9     75.2     88.7      0.95              0.29        10    0.095     21K   2283       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   0.0    109.3     95.5      0.65              0.20         5    0.131     21K   2283       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     73.7      0.30              0.08         5    0.059       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.021, interval 0.021
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.08 GB write, 0.14 MB/s write, 0.07 GB read, 0.12 MB/s read, 1.0 seconds
                                           Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.07 GB read, 0.12 MB/s read, 1.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5563169dd350#2 capacity: 304.00 MB usage: 2.43 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 0.000104 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(160,2.23 MB,0.733456%) FilterBlock(11,69.92 KB,0.0224615%) IndexBlock(11,133.39 KB,0.0428501%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 07 09:51:13 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:51:14 compute-1 ceph-mon[80077]: pgmap v261: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 853 B/s wr, 2 op/s
Dec 07 09:51:14 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:51:14 compute-1 ceph-mon[80077]: pgmap v262: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 852 B/s wr, 2 op/s
Dec 07 09:51:14 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:14 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:51:14 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:14.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:51:15 compute-1 sudo[123120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajtvwycfgivrrvatkzjnjqkivemseydn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101074.65873-162-204431843303831/AnsiballZ_setup.py'
Dec 07 09:51:15 compute-1 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Dec 07 09:51:15 compute-1 sudo[123120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:51:15 compute-1 python3.9[123122]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 07 09:51:15 compute-1 ceph-mon[80077]: pgmap v263: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Dec 07 09:51:15 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:15 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:51:15 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:15.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:51:15 compute-1 sudo[123120]: pam_unix(sudo:session): session closed for user root
Dec 07 09:51:15 compute-1 sudo[123204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khtxpeyoukdmmccyhqkaqavdkytopqem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101074.65873-162-204431843303831/AnsiballZ_dnf.py'
Dec 07 09:51:15 compute-1 sudo[123204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:51:16 compute-1 python3.9[123206]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 07 09:51:16 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:16 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 07 09:51:16 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:16 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec 07 09:51:16 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:16 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec 07 09:51:16 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:16 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec 07 09:51:16 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:16 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec 07 09:51:16 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:16 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec 07 09:51:16 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:16 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec 07 09:51:16 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:16 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 09:51:16 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:16 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 09:51:16 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:16 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 09:51:16 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:16 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec 07 09:51:16 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:16 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 09:51:16 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:16 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec 07 09:51:16 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:16 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec 07 09:51:16 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:16 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec 07 09:51:16 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:16 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec 07 09:51:16 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:16 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec 07 09:51:16 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:16 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec 07 09:51:16 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:16 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec 07 09:51:16 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:16 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec 07 09:51:16 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:16 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec 07 09:51:16 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:16 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec 07 09:51:16 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:16 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec 07 09:51:16 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:16 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec 07 09:51:16 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:16 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 07 09:51:16 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:16 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec 07 09:51:16 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:16 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 07 09:51:16 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:16 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.003000080s ======
Dec 07 09:51:16 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:16.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000080s
Dec 07 09:51:17 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:17 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8468000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:17 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/095117 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 07 09:51:17 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [NOTICE] 340/095117 (4) : haproxy version is 2.3.17-d1c9119
Dec 07 09:51:17 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [NOTICE] 340/095117 (4) : path to executable is /usr/local/sbin/haproxy
Dec 07 09:51:17 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [ALERT] 340/095117 (4) : backend 'backend' has no server available!
Dec 07 09:51:17 compute-1 sudo[123225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 09:51:17 compute-1 sudo[123225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:51:17 compute-1 sudo[123225]: pam_unix(sudo:session): session closed for user root
Dec 07 09:51:17 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:17 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:51:17 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:17.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:51:17 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:17 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8454001970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:17 compute-1 sudo[123204]: pam_unix(sudo:session): session closed for user root
Dec 07 09:51:18 compute-1 ceph-mon[80077]: pgmap v264: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 511 B/s wr, 1 op/s
Dec 07 09:51:18 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:18 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f843c000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:18 compute-1 sudo[123400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqodjpalpnxejuqtvimcnxgpdeffezqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101077.8508463-198-208588345862515/AnsiballZ_systemd.py'
Dec 07 09:51:18 compute-1 sudo[123400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:51:18 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:18 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:51:18 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:18.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:51:18 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:51:18 compute-1 python3.9[123402]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 07 09:51:19 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/095119 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 07 09:51:19 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:19 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8438000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:19 compute-1 sudo[123400]: pam_unix(sudo:session): session closed for user root
Dec 07 09:51:19 compute-1 ceph-mon[80077]: pgmap v265: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 511 B/s wr, 1 op/s
Dec 07 09:51:19 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:19 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:51:19 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:19.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:51:19 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:19 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8444000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:19 compute-1 sudo[123555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anjifaothovtkshfsebbdebuyxkxqknt ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765101079.3066413-222-31082808866792/AnsiballZ_edpm_nftables_snippet.py'
Dec 07 09:51:19 compute-1 sudo[123555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:51:19 compute-1 python3[123557]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Dec 07 09:51:19 compute-1 sudo[123555]: pam_unix(sudo:session): session closed for user root
Dec 07 09:51:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:20 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8454002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:20 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 07 09:51:20 compute-1 sudo[123708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugmtlfzbfewwroyazstmmjzffesktcnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101080.2953231-249-273345555137472/AnsiballZ_file.py'
Dec 07 09:51:20 compute-1 sudo[123708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:51:20 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:20 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:51:20 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:20.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:51:20 compute-1 python3.9[123710]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:51:20 compute-1 sudo[123708]: pam_unix(sudo:session): session closed for user root
Dec 07 09:51:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:21 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f843c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:21 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:21 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:51:21 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:21.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:51:21 compute-1 sudo[123860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usonebxtprvzvigjdvylqmkfpkmtythj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101081.097364-273-93569679924082/AnsiballZ_stat.py'
Dec 07 09:51:21 compute-1 sudo[123860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:51:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:21 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84380016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:21 compute-1 ceph-mon[80077]: pgmap v266: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 511 B/s wr, 1 op/s
Dec 07 09:51:21 compute-1 python3.9[123862]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:51:21 compute-1 sudo[123860]: pam_unix(sudo:session): session closed for user root
Dec 07 09:51:21 compute-1 sudo[123938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bujumbzaclxlfdqbhxromxcnfdydbniz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101081.097364-273-93569679924082/AnsiballZ_file.py'
Dec 07 09:51:21 compute-1 sudo[123938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:51:22 compute-1 python3.9[123940]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:51:22 compute-1 sudo[123938]: pam_unix(sudo:session): session closed for user root
Dec 07 09:51:22 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:22 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8444001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:22 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:22 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:51:22 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:22.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:51:22 compute-1 sudo[124091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atenfidfwdneunqgwuvvgowpczxctxfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101082.4543762-309-63088691411023/AnsiballZ_stat.py'
Dec 07 09:51:22 compute-1 sudo[124091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:51:23 compute-1 python3.9[124093]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:51:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:23 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8454002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:23 compute-1 sudo[124091]: pam_unix(sudo:session): session closed for user root
Dec 07 09:51:23 compute-1 sudo[124169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uunvmlwsizqjfxxmhbavqrrxriyzdvvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101082.4543762-309-63088691411023/AnsiballZ_file.py'
Dec 07 09:51:23 compute-1 sudo[124169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:51:23 compute-1 python3.9[124171]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.oj7_xfz7 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:51:23 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:23 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:51:23 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:23.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:51:23 compute-1 sudo[124169]: pam_unix(sudo:session): session closed for user root
Dec 07 09:51:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:23 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 07 09:51:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:23 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 07 09:51:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:23 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f843c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:23 compute-1 ceph-mon[80077]: pgmap v267: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 170 B/s wr, 0 op/s
Dec 07 09:51:23 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:51:23 compute-1 sudo[124321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfjkhrpueecgjflzsgxswrgcoihvqbch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101083.6418483-345-87296815275736/AnsiballZ_stat.py'
Dec 07 09:51:23 compute-1 sudo[124321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:51:24 compute-1 python3.9[124323]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:51:24 compute-1 sudo[124321]: pam_unix(sudo:session): session closed for user root
Dec 07 09:51:24 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:24 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84380016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:24 compute-1 sudo[124400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpahddugjyeeztjxiftdlnioffhzfzpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101083.6418483-345-87296815275736/AnsiballZ_file.py'
Dec 07 09:51:24 compute-1 sudo[124400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:51:24 compute-1 python3.9[124402]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:51:24 compute-1 sudo[124400]: pam_unix(sudo:session): session closed for user root
Dec 07 09:51:24 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:24 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:51:24 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:24.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:51:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:25 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8444001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:25 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 07 09:51:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:25 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 07 09:51:25 compute-1 sudo[124552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dedldhufzzlguwffeemonudvzigajsln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101084.9963396-384-180281870778524/AnsiballZ_command.py'
Dec 07 09:51:25 compute-1 sudo[124552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:51:25 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:25 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:51:25 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:25.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:51:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:25 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8454002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:25 compute-1 python3.9[124554]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:51:25 compute-1 sudo[124552]: pam_unix(sudo:session): session closed for user root
Dec 07 09:51:25 compute-1 ceph-mon[80077]: pgmap v268: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 2 op/s
Dec 07 09:51:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:26 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f843c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:26 compute-1 sudo[124706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kemnbrxpflrqorambqilgwmchnwxczjn ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765101085.896426-408-210967091336552/AnsiballZ_edpm_nftables_from_files.py'
Dec 07 09:51:26 compute-1 sudo[124706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:51:26 compute-1 python3[124708]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 07 09:51:26 compute-1 sudo[124706]: pam_unix(sudo:session): session closed for user root
Dec 07 09:51:26 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:26 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:51:26 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:26.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:51:27 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:27 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84380016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:27 compute-1 sudo[124858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhmeucslmzpurapsdtehktqukprrzeta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101086.797077-432-152251834244931/AnsiballZ_stat.py'
Dec 07 09:51:27 compute-1 sudo[124858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:51:27 compute-1 python3.9[124860]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:51:27 compute-1 sudo[124858]: pam_unix(sudo:session): session closed for user root
Dec 07 09:51:27 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:27 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:51:27 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:27.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:51:27 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:27 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84380016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:28 compute-1 ceph-mon[80077]: pgmap v269: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec 07 09:51:28 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:51:28 compute-1 sudo[124983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktziuphhhzqixfruohfcqimtnfahfznz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101086.797077-432-152251834244931/AnsiballZ_copy.py'
Dec 07 09:51:28 compute-1 sudo[124983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:51:28 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:28 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 07 09:51:28 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:28 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8444001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:28 compute-1 python3.9[124985]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101086.797077-432-152251834244931/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:51:28 compute-1 sudo[124983]: pam_unix(sudo:session): session closed for user root
Dec 07 09:51:28 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:28 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:51:28 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:28.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:51:28 compute-1 sudo[125136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yykphlzqiifitjgsmtdcerpnunijevuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101088.4735231-477-133650860246511/AnsiballZ_stat.py'
Dec 07 09:51:28 compute-1 sudo[125136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:51:28 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:51:28 compute-1 python3.9[125138]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:51:29 compute-1 sudo[125136]: pam_unix(sudo:session): session closed for user root
Dec 07 09:51:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:29 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f843c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:29 compute-1 sudo[125261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovdryoaeasnvmdgbcympesqhehlvegln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101088.4735231-477-133650860246511/AnsiballZ_copy.py'
Dec 07 09:51:29 compute-1 sudo[125261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:51:29 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:29 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:51:29 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:29.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:51:29 compute-1 sudo[125264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 09:51:29 compute-1 sudo[125264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:51:29 compute-1 sudo[125264]: pam_unix(sudo:session): session closed for user root
Dec 07 09:51:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:29 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8454002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:29 compute-1 sudo[125289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 07 09:51:29 compute-1 sudo[125289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:51:29 compute-1 python3.9[125263]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101088.4735231-477-133650860246511/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:51:29 compute-1 sudo[125261]: pam_unix(sudo:session): session closed for user root
Dec 07 09:51:30 compute-1 sudo[125289]: pam_unix(sudo:session): session closed for user root
Dec 07 09:51:30 compute-1 ceph-mon[80077]: pgmap v270: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec 07 09:51:30 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:51:30 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:51:30 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:30 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8438002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:30 compute-1 sudo[125495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhemhxkacdotrnhcmzgleqxikadhtqaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101090.003162-522-65766892924565/AnsiballZ_stat.py'
Dec 07 09:51:30 compute-1 sudo[125495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:51:30 compute-1 python3.9[125497]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:51:30 compute-1 sudo[125495]: pam_unix(sudo:session): session closed for user root
Dec 07 09:51:30 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:30 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:51:30 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:30.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:51:30 compute-1 sudo[125620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqtzbhyqkeflfoagizzkuhhyzxilbeat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101090.003162-522-65766892924565/AnsiballZ_copy.py'
Dec 07 09:51:30 compute-1 sudo[125620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:51:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:31 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8444002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:31 compute-1 python3.9[125622]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101090.003162-522-65766892924565/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:51:31 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:51:31 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 07 09:51:31 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:51:31 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:51:31 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 07 09:51:31 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 07 09:51:31 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:51:31 compute-1 sudo[125620]: pam_unix(sudo:session): session closed for user root
Dec 07 09:51:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:31 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 07 09:51:31 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:31 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:51:31 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:31.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:51:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:31 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f843c002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:31 compute-1 sudo[125772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gezorbquwtljowiytywthkmhzhzgyzfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101091.4046772-567-31596293659992/AnsiballZ_stat.py'
Dec 07 09:51:31 compute-1 sudo[125772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:51:31 compute-1 python3.9[125774]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:51:31 compute-1 sudo[125772]: pam_unix(sudo:session): session closed for user root
Dec 07 09:51:32 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:32 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f843c002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:32 compute-1 ceph-mon[80077]: pgmap v271: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 07 09:51:32 compute-1 sudo[125898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-capnlhwavhnjlsaeyfuvakardcadlchb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101091.4046772-567-31596293659992/AnsiballZ_copy.py'
Dec 07 09:51:32 compute-1 sudo[125898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:51:32 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:32 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:51:32 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:32.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:51:32 compute-1 python3.9[125900]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101091.4046772-567-31596293659992/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:51:32 compute-1 sudo[125898]: pam_unix(sudo:session): session closed for user root
Dec 07 09:51:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:33 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8438002f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:33 compute-1 sudo[126050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pryaizuvbrhzoxfgzsbwcsfidltkvobw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101093.026304-612-71161881135804/AnsiballZ_stat.py'
Dec 07 09:51:33 compute-1 sudo[126050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:51:33 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:33 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:51:33 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:33.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:51:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:33 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8444002f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:33 compute-1 python3.9[126052]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:51:33 compute-1 sudo[126050]: pam_unix(sudo:session): session closed for user root
Dec 07 09:51:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/095133 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 07 09:51:33 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:51:34 compute-1 sudo[126175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkxstdmiqpxieeywlpaodqmotbxlcrza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101093.026304-612-71161881135804/AnsiballZ_copy.py'
Dec 07 09:51:34 compute-1 sudo[126175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:51:34 compute-1 python3.9[126177]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101093.026304-612-71161881135804/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:51:34 compute-1 sudo[126175]: pam_unix(sudo:session): session closed for user root
Dec 07 09:51:34 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:34 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8444002f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:34 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:34 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 07 09:51:34 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:34 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 07 09:51:34 compute-1 ceph-mon[80077]: pgmap v272: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 07 09:51:34 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:34 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:51:34 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:34.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:51:34 compute-1 sudo[126328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlkquwgnutocuwuslqqtyhwumigxebal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101094.6664407-657-77038351020507/AnsiballZ_file.py'
Dec 07 09:51:34 compute-1 sudo[126328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:51:35 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:35 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8444002f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:35 compute-1 python3.9[126330]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:51:35 compute-1 sudo[126328]: pam_unix(sudo:session): session closed for user root
Dec 07 09:51:35 compute-1 ceph-mon[80077]: pgmap v273: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.2 KiB/s rd, 1.7 KiB/s wr, 6 op/s
Dec 07 09:51:35 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:35 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:51:35 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:35.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:51:35 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:35 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8438003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:35 compute-1 sudo[126480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkhlauwmfrupnoefqlyejgldfbcelvpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101095.4810176-681-5923401109029/AnsiballZ_command.py'
Dec 07 09:51:35 compute-1 sudo[126480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:51:36 compute-1 python3.9[126482]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:51:36 compute-1 sudo[126480]: pam_unix(sudo:session): session closed for user root
Dec 07 09:51:36 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:36 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8444002f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:36 compute-1 sudo[126563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 07 09:51:36 compute-1 sudo[126563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:51:36 compute-1 sudo[126563]: pam_unix(sudo:session): session closed for user root
Dec 07 09:51:36 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:36 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:51:36 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:36.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:51:36 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:36 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 07 09:51:37 compute-1 sudo[126661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqvpehjplcpcgvnmqvzxspoloytadprp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101096.3171308-705-183001204966469/AnsiballZ_blockinfile.py'
Dec 07 09:51:37 compute-1 sudo[126661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:51:37 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:37 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8454002290 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:37 compute-1 python3.9[126663]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:51:37 compute-1 sudo[126661]: pam_unix(sudo:session): session closed for user root
Dec 07 09:51:37 compute-1 sudo[126688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 09:51:37 compute-1 sudo[126688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:51:37 compute-1 sudo[126688]: pam_unix(sudo:session): session closed for user root
Dec 07 09:51:37 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:37 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:51:37 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:37.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:51:37 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:37 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8454002290 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:37 compute-1 sudo[126838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rupnkugmrmucwxcrmrymlvcllxwiiyra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101097.5711212-732-82502877311507/AnsiballZ_command.py'
Dec 07 09:51:37 compute-1 sudo[126838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:51:38 compute-1 python3.9[126840]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:51:38 compute-1 sudo[126838]: pam_unix(sudo:session): session closed for user root
Dec 07 09:51:38 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:38 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8438003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:38 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:38 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:51:38 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:38.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:51:38 compute-1 sudo[126992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdsylrakqgtiwvzqkoahdovupfefzeif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101098.4484744-756-25867808449499/AnsiballZ_stat.py'
Dec 07 09:51:38 compute-1 sudo[126992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:51:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:39 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8444004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:39 compute-1 python3.9[126994]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 07 09:51:39 compute-1 sudo[126992]: pam_unix(sudo:session): session closed for user root
Dec 07 09:51:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/095139 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 07 09:51:39 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:39 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:51:39 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:39.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:51:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:39 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8444004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:39 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:51:39 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:51:39 compute-1 ceph-mon[80077]: pgmap v274: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Dec 07 09:51:39 compute-1 sudo[127146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghbnbuyazocwnvrrzetxqbhwbhscnonr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101099.3621092-780-178217509234409/AnsiballZ_command.py'
Dec 07 09:51:39 compute-1 sudo[127146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:51:39 compute-1 python3.9[127148]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:51:39 compute-1 sudo[127146]: pam_unix(sudo:session): session closed for user root
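The two command tasks above drive the actual firewall load: 'nft -f /etc/nftables/edpm-chains.nft' (re)creates the EDPM chains, and the pipelined 'cat ... | nft -f -' then concatenates the flush, rules and update-jumps files so the flush and the new rule set are committed in a single nft transaction rather than leaving a window with empty chains. A shell sketch of the same sequence, with paths taken verbatim from the log:

    # recreate the EDPM chains, then flush and repopulate them atomically
    nft -f /etc/nftables/edpm-chains.nft
    set -o pipefail
    cat /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft | nft -f -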
Dec 07 09:51:40 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:51:40 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:40 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8444004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:40 compute-1 sudo[127302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjgawajbhqhmkvxxpryapcqpglzxbgqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101100.1705816-804-32863556354403/AnsiballZ_file.py'
Dec 07 09:51:40 compute-1 sudo[127302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:51:40 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:40 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:51:40 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:40.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:51:40 compute-1 python3.9[127304]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:51:40 compute-1 sudo[127302]: pam_unix(sudo:session): session closed for user root
Dec 07 09:51:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:41 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f843c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:41 compute-1 ceph-mon[80077]: pgmap v275: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Dec 07 09:51:41 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:41 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:51:41 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:41.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:51:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:41 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8454002290 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:42 compute-1 python3.9[127454]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 07 09:51:42 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:42 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8438003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:42 compute-1 ceph-mon[80077]: pgmap v276: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Dec 07 09:51:42 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:51:42 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:42 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:51:42 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:42.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:51:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:43 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8444004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:43 compute-1 sudo[127606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnreulspvfxrbeotaxfksmyvgibcbqdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101103.0860531-924-252406876390376/AnsiballZ_command.py'
Dec 07 09:51:43 compute-1 sudo[127606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:51:43 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:43 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:51:43 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:43.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:51:43 compute-1 ceph-mon[80077]: pgmap v277: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Dec 07 09:51:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:43 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f843c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:43 compute-1 python3.9[127608]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-1.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:0e:0a:f2:93:49:d5" external_ids:ovn-encap-ip=172.19.0.101 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:51:43 compute-1 ovs-vsctl[127609]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-1.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:0e:0a:f2:93:49:d5 external_ids:ovn-encap-ip=172.19.0.101 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Dec 07 09:51:43 compute-1 sudo[127606]: pam_unix(sudo:session): session closed for user root
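The ovs-vsctl call above (logged both by the Ansible command module and by ovs-vsctl itself) registers this node's OVN chassis configuration in the external_ids column of the Open_vSwitch table: the integration bridge, bridge and chassis-MAC mappings, the geneve encapsulation IP 172.19.0.101, and the southbound database endpoint ssl:ovsdbserver-sb.openstack.svc:6642. One way to confirm what was written, assuming the stock ovs-vsctl client on the host:

    # dump all external_ids on the single Open_vSwitch row
    ovs-vsctl get Open_vSwitch . external_ids
    # or read back a single key, e.g. the southbound remote
    ovs-vsctl get Open_vSwitch . external_ids:ovn-remote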
Dec 07 09:51:44 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:44 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f843c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:44 compute-1 sudo[127760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-doysitgwlzwvvsdjrvjueldibelbzbeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101103.9733987-951-64361606264699/AnsiballZ_command.py'
Dec 07 09:51:44 compute-1 sudo[127760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:51:44 compute-1 python3.9[127762]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:51:44 compute-1 sudo[127760]: pam_unix(sudo:session): session closed for user root
Dec 07 09:51:44 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:44 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:51:44 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:44.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:51:45 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:51:45 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:45 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8438003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:45 compute-1 sudo[127915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uurbtmhaxjzloevizrchhwrgdqzrvftn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101104.819094-975-134410001188493/AnsiballZ_command.py'
Dec 07 09:51:45 compute-1 sudo[127915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:51:45 compute-1 python3.9[127917]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:51:45 compute-1 ovs-vsctl[127918]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Dec 07 09:51:45 compute-1 sudo[127915]: pam_unix(sudo:session): session closed for user root
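In the Ansible entry above, part of the command is masked with asterisks (apparently caught by Ansible's user:password@host log-sanitizing heuristic), but the ovs-vsctl INFO line that follows it shows the full invocation: a Manager record with target ptcp:6640:127.0.0.1 is created and appended to manager_options, exposing the local OVSDB over a passive TCP socket on loopback port 6640. A quick check that the manager target was recorded, assuming the standard client:

    # list manager targets configured on the local OVSDB
    ovs-vsctl get-manager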
Dec 07 09:51:45 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:45 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:51:45 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:45.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:51:45 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:45 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8444004050 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:45 compute-1 ceph-mon[80077]: pgmap v278: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Dec 07 09:51:46 compute-1 python3.9[128068]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 07 09:51:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:46 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f843c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:46 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:46 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:51:46 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:46.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:51:46 compute-1 sudo[128221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skzpmoevjbtlsvssrklxngzqirwzdyhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101106.4728768-1026-217445308546852/AnsiballZ_file.py'
Dec 07 09:51:46 compute-1 sudo[128221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:51:46 compute-1 python3.9[128223]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:51:46 compute-1 sudo[128221]: pam_unix(sudo:session): session closed for user root
Dec 07 09:51:47 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:47 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f843c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:47 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:47 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:51:47 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:47.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:51:47 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:47 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8438003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:47 compute-1 sudo[128373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbbowjkmevgnlyonhkmiojhxdnqzrbog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101107.5254025-1050-30599761911240/AnsiballZ_stat.py'
Dec 07 09:51:47 compute-1 sudo[128373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:51:47 compute-1 ceph-mon[80077]: pgmap v279: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Dec 07 09:51:48 compute-1 python3.9[128375]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:51:48 compute-1 sudo[128373]: pam_unix(sudo:session): session closed for user root
Dec 07 09:51:48 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:48 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8438003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:48 compute-1 sudo[128454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbqeybipspgdgxcblfytamncisdyukla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101107.5254025-1050-30599761911240/AnsiballZ_file.py'
Dec 07 09:51:48 compute-1 sudo[128454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:51:48 compute-1 python3.9[128456]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:51:48 compute-1 sudo[128454]: pam_unix(sudo:session): session closed for user root
Dec 07 09:51:48 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:48 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:51:48 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:48.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:51:48 compute-1 sudo[128606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qabitzxiswuyadwvwaizomcritmzcwzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101108.6489706-1050-128019105084611/AnsiballZ_stat.py'
Dec 07 09:51:48 compute-1 sudo[128606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:51:49 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:49 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f843c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:49 compute-1 python3.9[128608]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:51:49 compute-1 sudo[128606]: pam_unix(sudo:session): session closed for user root
Dec 07 09:51:49 compute-1 sudo[128684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqnbsxhgejldeyehajnowvnsybbbxgma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101108.6489706-1050-128019105084611/AnsiballZ_file.py'
Dec 07 09:51:49 compute-1 sudo[128684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:51:49 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:49 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:51:49 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:49.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:51:49 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:49 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f842c000b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:49 compute-1 python3.9[128686]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:51:49 compute-1 sudo[128684]: pam_unix(sudo:session): session closed for user root
Dec 07 09:51:50 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:51:50 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:50 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8430000b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:50 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:50 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.002000054s ======
Dec 07 09:51:50 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:50.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Dec 07 09:51:50 compute-1 ceph-mon[80077]: pgmap v280: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Dec 07 09:51:50 compute-1 sudo[128837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqdfroigzepbjhghnpdangmrvskaqiha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101110.1830983-1119-192311851390147/AnsiballZ_file.py'
Dec 07 09:51:50 compute-1 sudo[128837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:51:51 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:51 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8438003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:51 compute-1 python3.9[128839]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:51:51 compute-1 sudo[128837]: pam_unix(sudo:session): session closed for user root
Dec 07 09:51:51 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:51 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:51:51 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:51.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:51:51 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:51 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f843c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:51 compute-1 sudo[128989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kaqlzeumdhnorwhnjkfvmwobtrophwzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101111.3156326-1143-55914452001230/AnsiballZ_stat.py'
Dec 07 09:51:51 compute-1 sudo[128989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:51:51 compute-1 python3.9[128991]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:51:51 compute-1 ceph-mon[80077]: pgmap v281: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec 07 09:51:51 compute-1 sudo[128989]: pam_unix(sudo:session): session closed for user root
Dec 07 09:51:52 compute-1 sudo[129068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbkwuasstgyjtnanwkibhqnkkawdknna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101111.3156326-1143-55914452001230/AnsiballZ_file.py'
Dec 07 09:51:52 compute-1 sudo[129068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:51:52 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:52 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f842c0016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:52 compute-1 python3.9[129070]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:51:52 compute-1 sudo[129068]: pam_unix(sudo:session): session closed for user root
Dec 07 09:51:52 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:52 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:51:52 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:52.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:51:52 compute-1 sudo[129220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyasxodqyufcrwckwxxikisifnyrcrtf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101112.666466-1179-34795045426967/AnsiballZ_stat.py'
Dec 07 09:51:52 compute-1 sudo[129220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:51:53 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:53 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84300016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:53 compute-1 python3.9[129222]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:51:53 compute-1 sudo[129220]: pam_unix(sudo:session): session closed for user root
Dec 07 09:51:53 compute-1 sudo[129298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmzadpmnrfyrsmrslsgklehekjwootll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101112.666466-1179-34795045426967/AnsiballZ_file.py'
Dec 07 09:51:53 compute-1 sudo[129298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:51:53 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:53 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:51:53 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:53.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:51:53 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:53 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8438003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:53 compute-1 python3.9[129300]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:51:53 compute-1 sudo[129298]: pam_unix(sudo:session): session closed for user root
Dec 07 09:51:53 compute-1 ceph-mon[80077]: pgmap v282: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:51:54 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:54 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f843c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:54 compute-1 sudo[129452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjbywkxrnldmgnepvgfozyjlyrsprgln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101113.9237776-1215-22197482216556/AnsiballZ_systemd.py'
Dec 07 09:51:54 compute-1 sudo[129452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:51:54 compute-1 python3.9[129454]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 07 09:51:54 compute-1 systemd[1]: Reloading.
Dec 07 09:51:54 compute-1 systemd-rc-local-generator[129482]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:51:54 compute-1 systemd-sysv-generator[129485]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:51:54 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:54 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:51:54 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:54.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:51:54 compute-1 sshd-session[129451]: Connection closed by authenticating user root 104.248.193.130 port 38610 [preauth]
Dec 07 09:51:55 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:51:55 compute-1 sudo[129452]: pam_unix(sudo:session): session closed for user root
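The systemd task above performs a daemon-reload (hence the rc-local and SysV generator messages) and then enables and starts edpm-container-shutdown, the unit whose service file and 91-edpm-container-shutdown.preset were installed just before. To confirm the resulting unit state after such a task, assuming systemctl on the host:

    systemctl is-enabled edpm-container-shutdown.service
    systemctl status edpm-container-shutdown.service --no-pager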
Dec 07 09:51:55 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:55 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f842c0016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:55 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:55 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:51:55 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:55.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:51:55 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:55 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84300016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:55 compute-1 sudo[129641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrrzhmbpycajrknfiavtswovtngeyuwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101115.4424655-1239-139847517863407/AnsiballZ_stat.py'
Dec 07 09:51:55 compute-1 sudo[129641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:51:55 compute-1 ceph-mon[80077]: pgmap v283: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:51:55 compute-1 python3.9[129643]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:51:55 compute-1 sudo[129641]: pam_unix(sudo:session): session closed for user root
Dec 07 09:51:56 compute-1 sudo[129720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ruagvmjfvsncknkxdtiliqqupbhmxucj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101115.4424655-1239-139847517863407/AnsiballZ_file.py'
Dec 07 09:51:56 compute-1 sudo[129720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:51:56 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:56 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8438003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:56 compute-1 python3.9[129722]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:51:56 compute-1 sudo[129720]: pam_unix(sudo:session): session closed for user root
Dec 07 09:51:56 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:56 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:51:56 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:56.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:51:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:57 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f843c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:57 compute-1 sudo[129872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctrsitcvmsejbtorricmpndkfhgomtch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101116.894828-1275-257482738001587/AnsiballZ_stat.py'
Dec 07 09:51:57 compute-1 sudo[129872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:51:57 compute-1 python3.9[129874]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:51:57 compute-1 sudo[129872]: pam_unix(sudo:session): session closed for user root
Dec 07 09:51:57 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:57 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:51:57 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:57.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:51:57 compute-1 sudo[129900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 09:51:57 compute-1 sudo[129900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:51:57 compute-1 sudo[129900]: pam_unix(sudo:session): session closed for user root
Dec 07 09:51:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:57 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f842c0016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:57 compute-1 sudo[129975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hoophgctlfbmknfrgsynjvkgrpzmupmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101116.894828-1275-257482738001587/AnsiballZ_file.py'
Dec 07 09:51:57 compute-1 sudo[129975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:51:57 compute-1 python3.9[129977]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:51:57 compute-1 sudo[129975]: pam_unix(sudo:session): session closed for user root
Dec 07 09:51:57 compute-1 ceph-mon[80077]: pgmap v284: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:51:57 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:51:58 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:58 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f84300016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:58 compute-1 sudo[130128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phodcsjbqwrcodnkmxmjuugnrwqgxxjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101118.1553538-1311-148298761308614/AnsiballZ_systemd.py'
Dec 07 09:51:58 compute-1 sudo[130128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:51:58 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:58 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:51:58 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:51:58.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:51:58 compute-1 python3.9[130130]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 07 09:51:58 compute-1 systemd[1]: Reloading.
Dec 07 09:51:59 compute-1 systemd-rc-local-generator[130158]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:51:59 compute-1 systemd-sysv-generator[130161]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:51:59 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:59 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8438003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:59 compute-1 systemd[1]: Starting Create netns directory...
Dec 07 09:51:59 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 07 09:51:59 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 07 09:51:59 compute-1 systemd[1]: Finished Create netns directory.
Dec 07 09:51:59 compute-1 sudo[130128]: pam_unix(sudo:session): session closed for user root
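netns-placeholder is enabled and started the same way; being a oneshot, it runs, finishes ('Finished Create netns directory') and is immediately reported as deactivated along with its run-netns-placeholder mount. Its purpose, per the unit description, is to ensure /run/netns exists and is set up as a mount point before containers that attach network namespaces start. A common way such a unit achieves that (an assumption here, not the unit's actual ExecStart) is:

    # create and remove a throwaway namespace so /run/netns gets set up
    ip netns add placeholder
    ip netns delete placeholder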
Dec 07 09:51:59 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:51:59 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:51:59 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:51:59.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:51:59 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:51:59 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f843c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:51:59 compute-1 ceph-mon[80077]: pgmap v285: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:52:00 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:52:00 compute-1 sudo[130322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpawvtdyquotvzrzzepyelysfhjbocgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101119.810041-1341-170274072748279/AnsiballZ_file.py'
Dec 07 09:52:00 compute-1 sudo[130322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:52:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:52:00 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f842c002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:52:00 compute-1 python3.9[130324]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:52:00 compute-1 sudo[130322]: pam_unix(sudo:session): session closed for user root
Dec 07 09:52:00 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:00 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:52:00 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:00.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:52:00 compute-1 sudo[130474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-addssbskxljuwtawyxftiynzqirowguu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101120.575679-1365-87874892454526/AnsiballZ_stat.py'
Dec 07 09:52:00 compute-1 sudo[130474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:52:01 compute-1 python3.9[130476]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:52:01 compute-1 kernel: ganesha.nfsd[128446]: segfault at 50 ip 00007f8516f6b32e sp 00007f84d6ffc210 error 4 in libntirpc.so.5.8[7f8516f50000+2c000] likely on CPU 5 (core 0, socket 5)
Dec 07 09:52:01 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec 07 09:52:01 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[122154]: 07/12/2025 09:52:01 : epoch 69354e05 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f842c002b10 fd 48 proxy ignored for local
Dec 07 09:52:01 compute-1 sudo[130474]: pam_unix(sudo:session): session closed for user root
Dec 07 09:52:01 compute-1 systemd[1]: Started Process Core Dump (PID 130477/UID 0).
Dec 07 09:52:01 compute-1 sudo[130599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mypxygkhvdbdqckwtqsvsgjyabfbizlj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101120.575679-1365-87874892454526/AnsiballZ_copy.py'
Dec 07 09:52:01 compute-1 sudo[130599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:52:01 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:01 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:52:01 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:01.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:52:01 compute-1 python3.9[130601]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765101120.575679-1365-87874892454526/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:52:01 compute-1 sudo[130599]: pam_unix(sudo:session): session closed for user root
Dec 07 09:52:01 compute-1 ceph-mon[80077]: pgmap v286: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 07 09:52:02 compute-1 systemd-coredump[130478]: Process 122158 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 56:
                                                    #0  0x00007f8516f6b32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Dec 07 09:52:02 compute-1 systemd[1]: systemd-coredump@2-130477-0.service: Deactivated successfully.
Dec 07 09:52:02 compute-1 systemd[1]: systemd-coredump@2-130477-0.service: Consumed 1.155s CPU time.
Dec 07 09:52:02 compute-1 podman[130688]: 2025-12-07 09:52:02.392701018 +0000 UTC m=+0.043094912 container died 53a7ed53da57fa4b431c918cb8dbcdc8378be7a2d2553973f58796e64b8ee5bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 07 09:52:02 compute-1 systemd[1]: var-lib-containers-storage-overlay-6bfbe1841630305909283f95b5e300cd0f5f1fd7c498a548e71a02c827061dd1-merged.mount: Deactivated successfully.
Dec 07 09:52:02 compute-1 podman[130688]: 2025-12-07 09:52:02.433897546 +0000 UTC m=+0.084291400 container remove 53a7ed53da57fa4b431c918cb8dbcdc8378be7a2d2553973f58796e64b8ee5bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 07 09:52:02 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Main process exited, code=exited, status=139/n/a
Dec 07 09:52:02 compute-1 sudo[130784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-poqwdllolrwlilletnccxcpikhcaexuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101122.2520735-1416-226240675454417/AnsiballZ_file.py'
Dec 07 09:52:02 compute-1 sudo[130784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:52:02 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Failed with result 'exit-code'.
Dec 07 09:52:02 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Consumed 1.671s CPU time.
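The kernel segfault at 09:52:01, the systemd-coredump record for ganesha.nfsd and the 'status=139' (128 + SIGSEGV) failure of the ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service unit all describe one event: the NFS-Ganesha daemon in the Ceph NFS container crashed inside libntirpc.so.5.8, which is consistent with the stream of TIRPC svc_vc_recv errors that container had been logging beforehand. The stored core can be examined on the host with coredumpctl, assuming the default systemd-coredump storage seen above:

    # list and inspect the ganesha core dump, then open it under gdb (debuginfo permitting)
    coredumpctl list ganesha.nfsd
    coredumpctl info ganesha.nfsd
    coredumpctl debug ganesha.nfsd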
Dec 07 09:52:02 compute-1 python3.9[130792]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:52:02 compute-1 sudo[130784]: pam_unix(sudo:session): session closed for user root
Dec 07 09:52:02 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:02 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:52:02 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:02.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:52:03 compute-1 sudo[130950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtgzzfxodtquciyfsgovvyigekinvmjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101123.1789606-1440-7779619591640/AnsiballZ_stat.py'
Dec 07 09:52:03 compute-1 sudo[130950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:52:03 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:03 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:52:03 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:03.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:52:03 compute-1 python3.9[130952]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:52:03 compute-1 sudo[130950]: pam_unix(sudo:session): session closed for user root
Dec 07 09:52:03 compute-1 ceph-mon[80077]: pgmap v287: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:52:04 compute-1 sudo[131074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huidnyzrpwxczyjkiksmtsdznuvdrons ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101123.1789606-1440-7779619591640/AnsiballZ_copy.py'
Dec 07 09:52:04 compute-1 sudo[131074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:52:04 compute-1 python3.9[131076]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765101123.1789606-1440-7779619591640/.source.json _original_basename=.l5n1fysp follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:52:04 compute-1 sudo[131074]: pam_unix(sudo:session): session closed for user root
Dec 07 09:52:04 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:04 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:52:04 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:04.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:52:04 compute-1 sudo[131226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhfcfvgztbzfrnqctwcitpugmspuzgpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101124.620734-1485-217126137977822/AnsiballZ_file.py'
Dec 07 09:52:04 compute-1 sudo[131226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:52:05 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:52:05 compute-1 python3.9[131228]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:52:05 compute-1 sudo[131226]: pam_unix(sudo:session): session closed for user root
Dec 07 09:52:05 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:05 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:52:05 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:05.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:52:05 compute-1 sudo[131378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxlpxvrpemmyfhyveuebfqegmngxrvsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101125.5245268-1509-26237243780818/AnsiballZ_stat.py'
Dec 07 09:52:05 compute-1 sudo[131378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:52:06 compute-1 sudo[131378]: pam_unix(sudo:session): session closed for user root
Dec 07 09:52:06 compute-1 ceph-mon[80077]: pgmap v288: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:52:06 compute-1 sudo[131502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-koavsvyaxpeuhfatrjeetihayfnjsxdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101125.5245268-1509-26237243780818/AnsiballZ_copy.py'
Dec 07 09:52:06 compute-1 sudo[131502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:52:06 compute-1 sudo[131502]: pam_unix(sudo:session): session closed for user root
Dec 07 09:52:06 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:06 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:52:06 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:06.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:52:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/095207 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 07 09:52:07 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:07 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:52:07 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:07.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:52:07 compute-1 sudo[131654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnevllbslytndfmokuvcqyglosphhklc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101127.4797332-1560-2625614216753/AnsiballZ_container_config_data.py'
Dec 07 09:52:07 compute-1 sudo[131654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:52:08 compute-1 python3.9[131656]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Dec 07 09:52:08 compute-1 sudo[131654]: pam_unix(sudo:session): session closed for user root
Dec 07 09:52:08 compute-1 ceph-mon[80077]: pgmap v289: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:52:08 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:08 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:52:08 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:08.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:52:08 compute-1 sudo[131807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgjradjucthkfjtjesfylktygwgrhqhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101128.490635-1587-175774006985398/AnsiballZ_container_config_hash.py'
Dec 07 09:52:08 compute-1 sudo[131807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:52:09 compute-1 python3.9[131809]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 07 09:52:09 compute-1 sudo[131807]: pam_unix(sudo:session): session closed for user root
Dec 07 09:52:09 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:09 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:52:09 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:09.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:52:09 compute-1 sudo[131959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-raraazmnhhyyepthopjfwndlmepiszvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101129.506787-1614-214159583587141/AnsiballZ_podman_container_info.py'
Dec 07 09:52:09 compute-1 sudo[131959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:52:10 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:52:10 compute-1 python3.9[131961]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 07 09:52:10 compute-1 ceph-mon[80077]: pgmap v290: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:52:10 compute-1 sudo[131959]: pam_unix(sudo:session): session closed for user root
Dec 07 09:52:10 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:10 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.002000054s ======
Dec 07 09:52:10 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:10.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Dec 07 09:52:11 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:11 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:52:11 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:11.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:52:11 compute-1 sudo[132138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajwmqloxzsmopftzfpbllasvjytdancb ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765101131.4134839-1653-114426367148978/AnsiballZ_edpm_container_manage.py'
Dec 07 09:52:11 compute-1 sudo[132138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:52:12 compute-1 ceph-mon[80077]: pgmap v291: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 07 09:52:12 compute-1 python3[132140]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 07 09:52:12 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Scheduled restart job, restart counter is at 3.
Dec 07 09:52:12 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.jddrlu for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec 07 09:52:12 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Consumed 1.671s CPU time.
Dec 07 09:52:12 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.jddrlu for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c...
Dec 07 09:52:12 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:12 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:52:12 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:12.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:52:13 compute-1 podman[132233]: 2025-12-07 09:52:13.003706958 +0000 UTC m=+0.047767478 container create b1cbcccabcfab4eb86ca5186d4177421719b3f33324e34ddb36823937a7c9516 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 07 09:52:13 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/822d98ef02258df8302a01f8da112fc3231cbd2ed65f159e788ed5231f43cd7f/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec 07 09:52:13 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/822d98ef02258df8302a01f8da112fc3231cbd2ed65f159e788ed5231f43cd7f/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec 07 09:52:13 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/822d98ef02258df8302a01f8da112fc3231cbd2ed65f159e788ed5231f43cd7f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 07 09:52:13 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/822d98ef02258df8302a01f8da112fc3231cbd2ed65f159e788ed5231f43cd7f/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.jddrlu-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec 07 09:52:13 compute-1 podman[132233]: 2025-12-07 09:52:12.980223621 +0000 UTC m=+0.024284161 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 07 09:52:13 compute-1 podman[132233]: 2025-12-07 09:52:13.090215188 +0000 UTC m=+0.134275708 container init b1cbcccabcfab4eb86ca5186d4177421719b3f33324e34ddb36823937a7c9516 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 07 09:52:13 compute-1 podman[132233]: 2025-12-07 09:52:13.096733656 +0000 UTC m=+0.140794186 container start b1cbcccabcfab4eb86ca5186d4177421719b3f33324e34ddb36823937a7c9516 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 07 09:52:13 compute-1 bash[132233]: b1cbcccabcfab4eb86ca5186d4177421719b3f33324e34ddb36823937a7c9516
Dec 07 09:52:13 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:13 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec 07 09:52:13 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:13 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec 07 09:52:13 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.jddrlu for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec 07 09:52:13 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:13 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec 07 09:52:13 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:13 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec 07 09:52:13 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:13 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec 07 09:52:13 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:13 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec 07 09:52:13 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:13 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec 07 09:52:13 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:13 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 07 09:52:13 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:52:13 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:13 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:52:13 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:13.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:52:14 compute-1 ceph-mon[80077]: pgmap v292: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 07 09:52:14 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:14 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:52:14 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:14.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:52:15 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:52:15 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:15 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:52:15 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:15.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:52:15 compute-1 ceph-mon[80077]: pgmap v293: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 07 09:52:16 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:16 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:52:16 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:16.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:52:17 compute-1 podman[132156]: 2025-12-07 09:52:17.334804897 +0000 UTC m=+5.060091112 image pull 3a37a52861b2e44ebd2a63ca2589a7c9d8e4119e5feace9d19c6312ed9b8421c quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c
Dec 07 09:52:17 compute-1 podman[132379]: 2025-12-07 09:52:17.49362993 +0000 UTC m=+0.060095783 container create 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 07 09:52:17 compute-1 podman[132379]: 2025-12-07 09:52:17.470854922 +0000 UTC m=+0.037320805 image pull 3a37a52861b2e44ebd2a63ca2589a7c9d8e4119e5feace9d19c6312ed9b8421c quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c
Dec 07 09:52:17 compute-1 python3[132140]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c
Dec 07 09:52:17 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:17 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:52:17 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:17.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:52:17 compute-1 sudo[132403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 09:52:17 compute-1 sudo[132403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:52:17 compute-1 sudo[132403]: pam_unix(sudo:session): session closed for user root
Dec 07 09:52:17 compute-1 sudo[132138]: pam_unix(sudo:session): session closed for user root
Dec 07 09:52:17 compute-1 ceph-mon[80077]: pgmap v294: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 07 09:52:18 compute-1 sudo[132594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfwjzbgymprdhzpmeteicykuuuengpeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101137.9094782-1677-227632878523762/AnsiballZ_stat.py'
Dec 07 09:52:18 compute-1 sudo[132594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:52:18 compute-1 python3.9[132596]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 07 09:52:18 compute-1 sudo[132594]: pam_unix(sudo:session): session closed for user root
Dec 07 09:52:18 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:18 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:52:18 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:18.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:52:19 compute-1 sudo[132748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcvchzdtahkafmgmqdqclxsxgyuialzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101138.8950822-1704-21181603757824/AnsiballZ_file.py'
Dec 07 09:52:19 compute-1 sudo[132748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:52:19 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:19 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 07 09:52:19 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:19 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 07 09:52:19 compute-1 python3.9[132750]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:52:19 compute-1 sudo[132748]: pam_unix(sudo:session): session closed for user root
Dec 07 09:52:19 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:19 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:52:19 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:19.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:52:19 compute-1 sudo[132824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfssreziixohmflscipbbjbjlhmvxwka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101138.8950822-1704-21181603757824/AnsiballZ_stat.py'
Dec 07 09:52:19 compute-1 sudo[132824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:52:19 compute-1 python3.9[132826]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 07 09:52:19 compute-1 sudo[132824]: pam_unix(sudo:session): session closed for user root
Dec 07 09:52:19 compute-1 ceph-mon[80077]: pgmap v295: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 170 B/s wr, 0 op/s
Dec 07 09:52:20 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:52:20 compute-1 sudo[132976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgnerxxaezygmunuvcppbgrusrfqqygv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101139.9313807-1704-233811853494271/AnsiballZ_copy.py'
Dec 07 09:52:20 compute-1 sudo[132976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:52:20 compute-1 python3.9[132978]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765101139.9313807-1704-233811853494271/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:52:20 compute-1 sudo[132976]: pam_unix(sudo:session): session closed for user root
Dec 07 09:52:20 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:20 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:52:20 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:20.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:52:20 compute-1 sudo[133052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bybhwibysuafifvtejrubwuwyczngnol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101139.9313807-1704-233811853494271/AnsiballZ_systemd.py'
Dec 07 09:52:20 compute-1 sudo[133052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:52:21 compute-1 python3.9[133054]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 07 09:52:21 compute-1 systemd[1]: Reloading.
Dec 07 09:52:21 compute-1 systemd-rc-local-generator[133076]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:52:21 compute-1 systemd-sysv-generator[133081]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:52:21 compute-1 sudo[133052]: pam_unix(sudo:session): session closed for user root
Dec 07 09:52:21 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:21 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:52:21 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:21.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:52:21 compute-1 sudo[133162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvvfbaflrlsmutjjtxpefemjvcqkzdwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101139.9313807-1704-233811853494271/AnsiballZ_systemd.py'
Dec 07 09:52:21 compute-1 sudo[133162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:52:22 compute-1 ceph-mon[80077]: pgmap v296: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 07 09:52:22 compute-1 python3.9[133164]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 07 09:52:22 compute-1 systemd[1]: Reloading.
Dec 07 09:52:22 compute-1 systemd-sysv-generator[133198]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:52:22 compute-1 systemd-rc-local-generator[133193]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:52:22 compute-1 systemd[1]: Starting ovn_controller container...
Dec 07 09:52:22 compute-1 systemd[1]: Started libcrun container.
Dec 07 09:52:22 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b05186672e7666756ef2899574431c986d0d48ca7f31636ec02a045a72218218/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec 07 09:52:22 compute-1 systemd[1]: Started /usr/bin/podman healthcheck run 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8.
Dec 07 09:52:22 compute-1 podman[133206]: 2025-12-07 09:52:22.689371857 +0000 UTC m=+0.217178381 container init 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 07 09:52:22 compute-1 ovn_controller[133221]: + sudo -E kolla_set_configs
Dec 07 09:52:22 compute-1 podman[133206]: 2025-12-07 09:52:22.716061881 +0000 UTC m=+0.243868345 container start 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 07 09:52:22 compute-1 edpm-start-podman-container[133206]: ovn_controller
Dec 07 09:52:22 compute-1 systemd[1]: Created slice User Slice of UID 0.
Dec 07 09:52:22 compute-1 systemd[1]: Starting User Runtime Directory /run/user/0...
Dec 07 09:52:22 compute-1 systemd[1]: Finished User Runtime Directory /run/user/0.
Dec 07 09:52:22 compute-1 systemd[1]: Starting User Manager for UID 0...
Dec 07 09:52:22 compute-1 edpm-start-podman-container[133205]: Creating additional drop-in dependency for "ovn_controller" (8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8)
Dec 07 09:52:22 compute-1 podman[133227]: 2025-12-07 09:52:22.792724305 +0000 UTC m=+0.068446981 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller)
Dec 07 09:52:22 compute-1 systemd[133259]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Dec 07 09:52:22 compute-1 systemd[1]: 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8-471c629939103796.service: Main process exited, code=exited, status=1/FAILURE
Dec 07 09:52:22 compute-1 systemd[1]: 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8-471c629939103796.service: Failed with result 'exit-code'.
Dec 07 09:52:22 compute-1 systemd[1]: Reloading.
Dec 07 09:52:22 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:22 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:52:22 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:22.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:52:22 compute-1 systemd-rc-local-generator[133308]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:52:22 compute-1 systemd-sysv-generator[133314]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:52:22 compute-1 systemd[133259]: Queued start job for default target Main User Target.
Dec 07 09:52:22 compute-1 systemd[133259]: Created slice User Application Slice.
Dec 07 09:52:22 compute-1 systemd[133259]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Dec 07 09:52:22 compute-1 systemd[133259]: Started Daily Cleanup of User's Temporary Directories.
Dec 07 09:52:22 compute-1 systemd[133259]: Reached target Paths.
Dec 07 09:52:22 compute-1 systemd[133259]: Reached target Timers.
Dec 07 09:52:22 compute-1 systemd[133259]: Starting D-Bus User Message Bus Socket...
Dec 07 09:52:22 compute-1 systemd[133259]: Starting Create User's Volatile Files and Directories...
Dec 07 09:52:22 compute-1 systemd[133259]: Finished Create User's Volatile Files and Directories.
Dec 07 09:52:22 compute-1 systemd[133259]: Listening on D-Bus User Message Bus Socket.
Dec 07 09:52:22 compute-1 systemd[133259]: Reached target Sockets.
Dec 07 09:52:22 compute-1 systemd[133259]: Reached target Basic System.
Dec 07 09:52:22 compute-1 systemd[133259]: Reached target Main User Target.
Dec 07 09:52:22 compute-1 systemd[133259]: Startup finished in 130ms.
Dec 07 09:52:23 compute-1 systemd[1]: Started User Manager for UID 0.
Dec 07 09:52:23 compute-1 systemd[1]: Started ovn_controller container.
Dec 07 09:52:23 compute-1 systemd[1]: Started Session c1 of User root.
Dec 07 09:52:23 compute-1 sudo[133162]: pam_unix(sudo:session): session closed for user root
Dec 07 09:52:23 compute-1 ovn_controller[133221]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 07 09:52:23 compute-1 ovn_controller[133221]: INFO:__main__:Validating config file
Dec 07 09:52:23 compute-1 ovn_controller[133221]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 07 09:52:23 compute-1 ovn_controller[133221]: INFO:__main__:Writing out command to execute
Dec 07 09:52:23 compute-1 systemd[1]: session-c1.scope: Deactivated successfully.
Dec 07 09:52:23 compute-1 ovn_controller[133221]: ++ cat /run_command
Dec 07 09:52:23 compute-1 ovn_controller[133221]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec 07 09:52:23 compute-1 ovn_controller[133221]: + ARGS=
Dec 07 09:52:23 compute-1 ovn_controller[133221]: + sudo kolla_copy_cacerts
Dec 07 09:52:23 compute-1 systemd[1]: Started Session c2 of User root.
Dec 07 09:52:23 compute-1 ovn_controller[133221]: + [[ ! -n '' ]]
Dec 07 09:52:23 compute-1 ovn_controller[133221]: + . kolla_extend_start
Dec 07 09:52:23 compute-1 ovn_controller[133221]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec 07 09:52:23 compute-1 ovn_controller[133221]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Dec 07 09:52:23 compute-1 ovn_controller[133221]: + umask 0022
Dec 07 09:52:23 compute-1 ovn_controller[133221]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Dec 07 09:52:23 compute-1 systemd[1]: session-c2.scope: Deactivated successfully.
Dec 07 09:52:23 compute-1 ovn_controller[133221]: 2025-12-07T09:52:23Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec 07 09:52:23 compute-1 ovn_controller[133221]: 2025-12-07T09:52:23Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec 07 09:52:23 compute-1 ovn_controller[133221]: 2025-12-07T09:52:23Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Dec 07 09:52:23 compute-1 ovn_controller[133221]: 2025-12-07T09:52:23Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Dec 07 09:52:23 compute-1 ovn_controller[133221]: 2025-12-07T09:52:23Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec 07 09:52:23 compute-1 ovn_controller[133221]: 2025-12-07T09:52:23Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Dec 07 09:52:23 compute-1 NetworkManager[48950]: <info>  [1765101143.2342] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Dec 07 09:52:23 compute-1 NetworkManager[48950]: <info>  [1765101143.2349] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 07 09:52:23 compute-1 NetworkManager[48950]: <info>  [1765101143.2360] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Dec 07 09:52:23 compute-1 NetworkManager[48950]: <info>  [1765101143.2365] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Dec 07 09:52:23 compute-1 NetworkManager[48950]: <info>  [1765101143.2368] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec 07 09:52:23 compute-1 kernel: br-int: entered promiscuous mode
Dec 07 09:52:23 compute-1 ovn_controller[133221]: 2025-12-07T09:52:23Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec 07 09:52:23 compute-1 ovn_controller[133221]: 2025-12-07T09:52:23Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 07 09:52:23 compute-1 ovn_controller[133221]: 2025-12-07T09:52:23Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 07 09:52:23 compute-1 ovn_controller[133221]: 2025-12-07T09:52:23Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Dec 07 09:52:23 compute-1 ovn_controller[133221]: 2025-12-07T09:52:23Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Dec 07 09:52:23 compute-1 ovn_controller[133221]: 2025-12-07T09:52:23Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Dec 07 09:52:23 compute-1 ovn_controller[133221]: 2025-12-07T09:52:23Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec 07 09:52:23 compute-1 ovn_controller[133221]: 2025-12-07T09:52:23Z|00014|main|INFO|OVS feature set changed, force recompute.
Dec 07 09:52:23 compute-1 ovn_controller[133221]: 2025-12-07T09:52:23Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 07 09:52:23 compute-1 ovn_controller[133221]: 2025-12-07T09:52:23Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 07 09:52:23 compute-1 ovn_controller[133221]: 2025-12-07T09:52:23Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 07 09:52:23 compute-1 ovn_controller[133221]: 2025-12-07T09:52:23Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Dec 07 09:52:23 compute-1 ovn_controller[133221]: 2025-12-07T09:52:23Z|00019|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Dec 07 09:52:23 compute-1 ovn_controller[133221]: 2025-12-07T09:52:23Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 07 09:52:23 compute-1 ovn_controller[133221]: 2025-12-07T09:52:23Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec 07 09:52:23 compute-1 ovn_controller[133221]: 2025-12-07T09:52:23Z|00022|main|INFO|OVS feature set changed, force recompute.
Dec 07 09:52:23 compute-1 ovn_controller[133221]: 2025-12-07T09:52:23Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Dec 07 09:52:23 compute-1 ovn_controller[133221]: 2025-12-07T09:52:23Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Dec 07 09:52:23 compute-1 ovn_controller[133221]: 2025-12-07T09:52:23Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 07 09:52:23 compute-1 ovn_controller[133221]: 2025-12-07T09:52:23Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 07 09:52:23 compute-1 ovn_controller[133221]: 2025-12-07T09:52:23Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 07 09:52:23 compute-1 ovn_controller[133221]: 2025-12-07T09:52:23Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 07 09:52:23 compute-1 ovn_controller[133221]: 2025-12-07T09:52:23Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 07 09:52:23 compute-1 ovn_controller[133221]: 2025-12-07T09:52:23Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 07 09:52:23 compute-1 NetworkManager[48950]: <info>  [1765101143.2612] manager: (ovn-0e65d7-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Dec 07 09:52:23 compute-1 systemd-udevd[133356]: Network interface NamePolicy= disabled on kernel command line.
Dec 07 09:52:23 compute-1 kernel: genev_sys_6081: entered promiscuous mode
Dec 07 09:52:23 compute-1 systemd-udevd[133357]: Network interface NamePolicy= disabled on kernel command line.
Dec 07 09:52:23 compute-1 NetworkManager[48950]: <info>  [1765101143.2807] device (genev_sys_6081): carrier: link connected
Dec 07 09:52:23 compute-1 NetworkManager[48950]: <info>  [1765101143.2810] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Dec 07 09:52:23 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:23 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:52:23 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:23.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:52:23 compute-1 ceph-mon[80077]: pgmap v297: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Dec 07 09:52:23 compute-1 NetworkManager[48950]: <info>  [1765101143.9730] manager: (ovn-cbaa5e-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Dec 07 09:52:24 compute-1 NetworkManager[48950]: <info>  [1765101144.4223] manager: (ovn-8da812-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Dec 07 09:52:24 compute-1 sudo[133487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddwmxfuphicrprrkouzvoephzvzelmoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101144.556253-1788-260906232327733/AnsiballZ_command.py'
Dec 07 09:52:24 compute-1 sudo[133487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:52:24 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:24 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:52:24 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:24.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:52:25 compute-1 python3.9[133489]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:52:25 compute-1 ovs-vsctl[133490]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Dec 07 09:52:25 compute-1 sudo[133487]: pam_unix(sudo:session): session closed for user root
Dec 07 09:52:25 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:52:25 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:25 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:52:25 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:25.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:52:25 compute-1 sudo[133640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eocyxugvcjlbdjsebwiyvsedzwssvscz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101145.3509276-1812-215160578525846/AnsiballZ_command.py'
Dec 07 09:52:25 compute-1 sudo[133640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:52:25 compute-1 ceph-mon[80077]: pgmap v298: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 07 09:52:25 compute-1 python3.9[133642]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:52:25 compute-1 ovs-vsctl[133644]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Dec 07 09:52:25 compute-1 sudo[133640]: pam_unix(sudo:session): session closed for user root
Dec 07 09:52:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:26 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 07 09:52:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:26 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec 07 09:52:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:26 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec 07 09:52:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:26 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec 07 09:52:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:26 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec 07 09:52:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:26 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec 07 09:52:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:26 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec 07 09:52:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:26 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 09:52:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:26 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 09:52:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:26 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 09:52:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:26 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec 07 09:52:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:26 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 09:52:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:26 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec 07 09:52:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:26 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec 07 09:52:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:26 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec 07 09:52:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:26 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec 07 09:52:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:26 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec 07 09:52:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:26 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec 07 09:52:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:26 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec 07 09:52:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:26 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec 07 09:52:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:26 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec 07 09:52:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:26 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec 07 09:52:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:26 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec 07 09:52:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:26 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec 07 09:52:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:26 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 07 09:52:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:26 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec 07 09:52:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:26 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 07 09:52:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:26 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd088000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:52:26 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:26 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:52:26 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:26.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:52:27 compute-1 sudo[133810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjjodzaxoifxyhxnnorigptwdxawcfnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101146.7603605-1854-173636251553276/AnsiballZ_command.py'
Dec 07 09:52:27 compute-1 sudo[133810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:52:27 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:27 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0780016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:52:27 compute-1 python3.9[133812]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:52:27 compute-1 ovs-vsctl[133813]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Dec 07 09:52:27 compute-1 sudo[133810]: pam_unix(sudo:session): session closed for user root
Dec 07 09:52:27 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:27 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:52:27 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:27.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:52:27 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:27 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd064000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:52:27 compute-1 sshd-session[122200]: Connection closed by 192.168.122.30 port 52942
Dec 07 09:52:27 compute-1 sshd-session[122196]: pam_unix(sshd:session): session closed for user zuul
Dec 07 09:52:27 compute-1 systemd[1]: session-49.scope: Deactivated successfully.
Dec 07 09:52:27 compute-1 systemd[1]: session-49.scope: Consumed 1min 921ms CPU time.
Dec 07 09:52:27 compute-1 systemd-logind[796]: Session 49 logged out. Waiting for processes to exit.
Dec 07 09:52:27 compute-1 systemd-logind[796]: Removed session 49.
Dec 07 09:52:27 compute-1 ceph-mon[80077]: pgmap v299: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Dec 07 09:52:27 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:52:28 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:28 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd084001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:52:28 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:28 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:52:28 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:28.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:52:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/095229 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 07 09:52:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:29 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd06c000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:52:29 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:29 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:52:29 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:29.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:52:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:29 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0780016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:52:30 compute-1 ceph-mon[80077]: pgmap v300: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Dec 07 09:52:30 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:52:30 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:30 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0780016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:52:30 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:30 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:52:30 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:30.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:52:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:31 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0840023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:52:31 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:31 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:52:31 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:31.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:52:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:31 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd06c001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:52:32 compute-1 ceph-mon[80077]: pgmap v301: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 852 B/s wr, 3 op/s
Dec 07 09:52:32 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:32 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0780016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:52:32 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:32 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:52:32 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:32.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:52:32 compute-1 sshd-session[133841]: Accepted publickey for zuul from 192.168.122.30 port 56748 ssh2: ECDSA SHA256:Ge4vEI3tJyOAEV3kv3WUGTzBV/uoL+dgWZHkPhbKL64
Dec 07 09:52:32 compute-1 systemd-logind[796]: New session 51 of user zuul.
Dec 07 09:52:32 compute-1 systemd[1]: Started Session 51 of User zuul.
Dec 07 09:52:32 compute-1 sshd-session[133841]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 07 09:52:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:33 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0780016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:52:33 compute-1 systemd[1]: Stopping User Manager for UID 0...
Dec 07 09:52:33 compute-1 systemd[133259]: Activating special unit Exit the Session...
Dec 07 09:52:33 compute-1 systemd[133259]: Stopped target Main User Target.
Dec 07 09:52:33 compute-1 systemd[133259]: Stopped target Basic System.
Dec 07 09:52:33 compute-1 systemd[133259]: Stopped target Paths.
Dec 07 09:52:33 compute-1 systemd[133259]: Stopped target Sockets.
Dec 07 09:52:33 compute-1 systemd[133259]: Stopped target Timers.
Dec 07 09:52:33 compute-1 systemd[133259]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 07 09:52:33 compute-1 systemd[133259]: Closed D-Bus User Message Bus Socket.
Dec 07 09:52:33 compute-1 systemd[133259]: Stopped Create User's Volatile Files and Directories.
Dec 07 09:52:33 compute-1 systemd[133259]: Removed slice User Application Slice.
Dec 07 09:52:33 compute-1 systemd[133259]: Reached target Shutdown.
Dec 07 09:52:33 compute-1 systemd[133259]: Finished Exit the Session.
Dec 07 09:52:33 compute-1 systemd[133259]: Reached target Exit the Session.
Dec 07 09:52:33 compute-1 systemd[1]: user@0.service: Deactivated successfully.
Dec 07 09:52:33 compute-1 systemd[1]: Stopped User Manager for UID 0.
Dec 07 09:52:33 compute-1 systemd[1]: Stopping User Runtime Directory /run/user/0...
Dec 07 09:52:33 compute-1 systemd[1]: run-user-0.mount: Deactivated successfully.
Dec 07 09:52:33 compute-1 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Dec 07 09:52:33 compute-1 systemd[1]: Stopped User Runtime Directory /run/user/0.
Dec 07 09:52:33 compute-1 systemd[1]: Removed slice User Slice of UID 0.
Dec 07 09:52:33 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:33 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:52:33 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:33.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:52:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:33 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0840023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:52:33 compute-1 python3.9[133997]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 07 09:52:34 compute-1 ceph-mon[80077]: pgmap v302: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec 07 09:52:34 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:34 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd06c001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:52:34 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:34 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:52:34 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:34.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:52:35 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:52:35 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:35 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0640016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:52:35 compute-1 sudo[134152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmdiivahjclmqixrnxzsxswvajoxqrqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101154.617246-63-257908988766794/AnsiballZ_file.py'
Dec 07 09:52:35 compute-1 sudo[134152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:52:35 compute-1 python3.9[134154]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:52:35 compute-1 sudo[134152]: pam_unix(sudo:session): session closed for user root
Dec 07 09:52:35 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:35 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:52:35 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:35.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:52:35 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:35 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0780016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:52:35 compute-1 sudo[134304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axckmsiwbrjexpjkmqkutevqrryrwpyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101155.4711006-63-128981622788506/AnsiballZ_file.py'
Dec 07 09:52:35 compute-1 sudo[134304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:52:35 compute-1 python3.9[134306]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:52:35 compute-1 sudo[134304]: pam_unix(sudo:session): session closed for user root
Dec 07 09:52:36 compute-1 ceph-mon[80077]: pgmap v303: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 07 09:52:36 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:36 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0840023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:52:36 compute-1 sudo[134457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfxjpejjojrsygkvssvmbivcgygszixt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101156.1129987-63-278608338556122/AnsiballZ_file.py'
Dec 07 09:52:36 compute-1 sudo[134457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:52:36 compute-1 python3.9[134459]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:52:36 compute-1 sudo[134457]: pam_unix(sudo:session): session closed for user root
Dec 07 09:52:36 compute-1 sudo[134460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 09:52:36 compute-1 sudo[134460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:52:36 compute-1 sudo[134460]: pam_unix(sudo:session): session closed for user root
Dec 07 09:52:36 compute-1 sudo[134485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 07 09:52:36 compute-1 sudo[134485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:52:36 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:36 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:52:36 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:36.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:52:37 compute-1 sudo[134673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-koobtsxrabumvjswouxzlgqjdholfphb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101156.7973409-63-163721538975949/AnsiballZ_file.py'
Dec 07 09:52:37 compute-1 sudo[134673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:52:37 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:37 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd06c001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:52:37 compute-1 sudo[134485]: pam_unix(sudo:session): session closed for user root
Dec 07 09:52:37 compute-1 python3.9[134676]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:52:37 compute-1 sudo[134673]: pam_unix(sudo:session): session closed for user root
Dec 07 09:52:37 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:37 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:52:37 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:37.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:52:37 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:37 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd06c001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:52:37 compute-1 sudo[134842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pajjhwyuvdrmjdtuxxbrukwbbtfuldxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101157.4137306-63-145119135103040/AnsiballZ_file.py'
Dec 07 09:52:37 compute-1 sudo[134842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:52:37 compute-1 sudo[134845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 09:52:37 compute-1 sudo[134845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:52:37 compute-1 sudo[134845]: pam_unix(sudo:session): session closed for user root
Dec 07 09:52:37 compute-1 python3.9[134844]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:52:37 compute-1 sudo[134842]: pam_unix(sudo:session): session closed for user root
Dec 07 09:52:38 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:38 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0780016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:52:38 compute-1 ceph-mon[80077]: pgmap v304: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 07 09:52:38 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:52:38 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 07 09:52:38 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:52:38 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:52:38 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 07 09:52:38 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 07 09:52:38 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:52:38 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:38 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:52:38 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:38.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:52:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:39 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0840034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:52:39 compute-1 python3.9[135020]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 07 09:52:39 compute-1 ceph-mon[80077]: pgmap v305: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 07 09:52:39 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:39 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:52:39 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:39.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:52:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:39 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd064002050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:52:39 compute-1 sudo[135170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpioicxxvfniviejcosoksyexphwinon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101159.4512691-195-28409827060164/AnsiballZ_seboolean.py'
Dec 07 09:52:39 compute-1 sudo[135170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:52:40 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:52:40 compute-1 python3.9[135172]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec 07 09:52:40 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:40 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd06c003340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:52:40 compute-1 sudo[135170]: pam_unix(sudo:session): session closed for user root
Dec 07 09:52:40 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:40 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:52:40 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:40.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:52:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:41 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0780016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:52:41 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:41 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:52:41 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:41.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:52:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:41 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0780016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:52:41 compute-1 python3.9[135324]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:52:42 compute-1 ceph-mon[80077]: pgmap v306: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec 07 09:52:42 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:42 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd064002050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:52:42 compute-1 python3.9[135446]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765101161.1115203-219-207468369633748/.source follow=False _original_basename=haproxy.j2 checksum=cc5e97ea900947bff0c19d73b88d99840e041f49 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:52:42 compute-1 sudo[135447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 07 09:52:42 compute-1 sudo[135447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:52:42 compute-1 sudo[135447]: pam_unix(sudo:session): session closed for user root
Dec 07 09:52:42 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:42 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:52:42 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:42.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:52:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:43 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd06c003340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:52:43 compute-1 python3.9[135621]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:52:43 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:52:43 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:52:43 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:52:43 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:43 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:52:43 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:43.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:52:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:43 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0780016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:52:43 compute-1 python3.9[135742]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765101162.6838818-264-249380739979222/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:52:44 compute-1 ceph-mon[80077]: pgmap v307: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:52:44 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:44 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0780016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:52:44 compute-1 sudo[135895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqqkllqojedurqflyzfknrudiqcnecav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101164.2331753-315-48118733730373/AnsiballZ_setup.py'
Dec 07 09:52:44 compute-1 sudo[135895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:52:44 compute-1 sshd-session[135767]: Connection closed by authenticating user root 104.248.193.130 port 55326 [preauth]
Dec 07 09:52:44 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:44 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:52:44 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:44.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:52:44 compute-1 python3.9[135897]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 07 09:52:45 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:52:45 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:45 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd064002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:52:45 compute-1 sudo[135895]: pam_unix(sudo:session): session closed for user root
Dec 07 09:52:45 compute-1 ceph-mon[80077]: pgmap v308: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 07 09:52:45 compute-1 sudo[135979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnatqbihjlxihcgdhdyqayuleznufsas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101164.2331753-315-48118733730373/AnsiballZ_dnf.py'
Dec 07 09:52:45 compute-1 sudo[135979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:52:45 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:45 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:52:45 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:45.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:52:45 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:45 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0780016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:52:45 compute-1 python3.9[135981]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 07 09:52:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:46 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0780016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:52:46 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:46 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:52:46 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:46.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:52:47 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:47 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0780016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:52:47 compute-1 ceph-mon[80077]: pgmap v309: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:52:47 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:47 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:52:47 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:47.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:52:47 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:47 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0780016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:52:47 compute-1 sudo[135979]: pam_unix(sudo:session): session closed for user root
Dec 07 09:52:48 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:48 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0780016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:52:48 compute-1 sudo[136136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bagtigwxzdfszxfmhitrvqmalfefqwib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101167.7845519-351-154137784656819/AnsiballZ_systemd.py'
Dec 07 09:52:48 compute-1 sudo[136136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:52:48 compute-1 python3.9[136138]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 07 09:52:48 compute-1 sudo[136136]: pam_unix(sudo:session): session closed for user root
Dec 07 09:52:48 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:48 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:52:48 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:48.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:52:49 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:49 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd058000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:52:49 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:49 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:52:49 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:49.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:52:49 compute-1 python3.9[136291]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:52:49 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:49 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd054000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:52:49 compute-1 ceph-mon[80077]: pgmap v310: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:52:50 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:52:50 compute-1 python3.9[136412]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765101169.066206-375-254118638067669/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:52:50 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:50 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd054000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:52:50 compute-1 python3.9[136563]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:52:50 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:50 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:52:50 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:50.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:52:51 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:51 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd060000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:52:51 compute-1 python3.9[136684]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765101170.4497008-375-169595295972850/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:52:51 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:51 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:52:51 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:51.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:52:51 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:51 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd054000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:52:51 compute-1 ceph-mon[80077]: pgmap v311: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 07 09:52:52 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:52 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd06c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:52:52 compute-1 python3.9[136835]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:52:52 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:52 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:52:52 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:52.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:52:53 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:53 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd06c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:52:53 compute-1 ovn_controller[133221]: 2025-12-07T09:52:53Z|00025|memory|INFO|16256 kB peak resident set size after 30.1 seconds
Dec 07 09:52:53 compute-1 ovn_controller[133221]: 2025-12-07T09:52:53Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:3
Dec 07 09:52:53 compute-1 podman[136930]: 2025-12-07 09:52:53.38569191 +0000 UTC m=+0.120013801 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 07 09:52:53 compute-1 python3.9[136965]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765101172.3847106-507-193420977302077/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:52:53 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:53 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:52:53 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:53.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:52:53 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:53 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd060001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:52:53 compute-1 ceph-mon[80077]: pgmap v312: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:52:54 compute-1 python3.9[137131]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:52:54 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:54 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd054001fe0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:52:54 compute-1 python3.9[137253]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765101173.6881437-507-39360607776580/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:52:54 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:54 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:52:54 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:54.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:52:55 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:52:55 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:55 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0780037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:52:55 compute-1 python3.9[137403]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 07 09:52:55 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:55 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:52:55 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:55.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:52:55 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:55 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd06c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:52:55 compute-1 ceph-mon[80077]: pgmap v313: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 07 09:52:56 compute-1 sudo[137556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvniyxzjlysibyobbydvmbhjqvjxvpps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101175.9242618-621-53651497212921/AnsiballZ_file.py'
Dec 07 09:52:56 compute-1 sudo[137556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:52:56 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:56 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd060001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:52:56 compute-1 python3.9[137558]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:52:56 compute-1 sudo[137556]: pam_unix(sudo:session): session closed for user root
Dec 07 09:52:56 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:56 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:52:56 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:56.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:52:57 compute-1 sudo[137708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykuycmvifuvicsjkwvjiytclozdeelht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101176.756772-645-61916428258265/AnsiballZ_stat.py'
Dec 07 09:52:57 compute-1 sudo[137708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:52:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:57 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd060001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:52:57 compute-1 python3.9[137710]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:52:57 compute-1 sudo[137708]: pam_unix(sudo:session): session closed for user root
Dec 07 09:52:57 compute-1 sudo[137786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfhcfgnpdhcfebmtninnjqodcivpkjre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101176.756772-645-61916428258265/AnsiballZ_file.py'
Dec 07 09:52:57 compute-1 sudo[137786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:52:57 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:57 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:52:57 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:57.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:52:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:57 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0780037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:52:57 compute-1 python3.9[137788]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:52:57 compute-1 sudo[137786]: pam_unix(sudo:session): session closed for user root
Dec 07 09:52:57 compute-1 sudo[137813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 09:52:57 compute-1 sudo[137813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:52:57 compute-1 sudo[137813]: pam_unix(sudo:session): session closed for user root
Dec 07 09:52:57 compute-1 ceph-mon[80077]: pgmap v314: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:52:57 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:52:58 compute-1 sudo[137963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icxsbmudrmglvlrmrwmfckhfqjinloab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101177.8286273-645-202143273768321/AnsiballZ_stat.py'
Dec 07 09:52:58 compute-1 sudo[137963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:52:58 compute-1 python3.9[137966]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:52:58 compute-1 sudo[137963]: pam_unix(sudo:session): session closed for user root
Dec 07 09:52:58 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:58 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd06c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:52:58 compute-1 sudo[138042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uspemdzlbdywxvqxzzvcubsnznhjwpdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101177.8286273-645-202143273768321/AnsiballZ_file.py'
Dec 07 09:52:58 compute-1 sudo[138042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:52:58 compute-1 python3.9[138044]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:52:58 compute-1 sudo[138042]: pam_unix(sudo:session): session closed for user root
Dec 07 09:52:58 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:58 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:52:58 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:52:58.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:52:59 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:59 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd060001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:52:59 compute-1 sudo[138194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnfgmtggquczmhlfscyuresyutrjxrml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101179.2571354-714-56322426125787/AnsiballZ_file.py'
Dec 07 09:52:59 compute-1 sudo[138194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:52:59 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:52:59 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:52:59 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:52:59.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:52:59 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:52:59 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd054002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:52:59 compute-1 python3.9[138196]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:52:59 compute-1 sudo[138194]: pam_unix(sudo:session): session closed for user root
Dec 07 09:53:00 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:53:00 compute-1 ceph-mon[80077]: pgmap v315: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:53:00 compute-1 sudo[138347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lixikyiiqdybxyzhchxcwjoflopzqlbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101180.0276966-738-169249247597578/AnsiballZ_stat.py'
Dec 07 09:53:00 compute-1 sudo[138347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:53:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:53:00 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0780037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:53:00 compute-1 python3.9[138349]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:53:00 compute-1 sudo[138347]: pam_unix(sudo:session): session closed for user root
Dec 07 09:53:00 compute-1 sudo[138425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmlhszmbzqzqvvmgpryohurcactrmgoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101180.0276966-738-169249247597578/AnsiballZ_file.py'
Dec 07 09:53:00 compute-1 sudo[138425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:53:00 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:00 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:53:00 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:00.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:53:00 compute-1 python3.9[138427]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:53:00 compute-1 sudo[138425]: pam_unix(sudo:session): session closed for user root
Dec 07 09:53:01 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:53:01 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd06c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:53:01 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:01 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:53:01 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:01.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:53:01 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:53:01 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd060001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:53:01 compute-1 sudo[138577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqegoirnrhysevfdvxzmaflokjrlbgvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101181.4599965-774-38308451960451/AnsiballZ_stat.py'
Dec 07 09:53:01 compute-1 sudo[138577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:53:01 compute-1 python3.9[138579]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:53:01 compute-1 sudo[138577]: pam_unix(sudo:session): session closed for user root
Dec 07 09:53:02 compute-1 ceph-mon[80077]: pgmap v316: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 07 09:53:02 compute-1 sudo[138656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlnbckgldotbtevmbtusfiwgkwvlgfzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101181.4599965-774-38308451960451/AnsiballZ_file.py'
Dec 07 09:53:02 compute-1 sudo[138656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:53:02 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:53:02 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd054002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:53:02 compute-1 python3.9[138658]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:53:02 compute-1 sudo[138656]: pam_unix(sudo:session): session closed for user root
Dec 07 09:53:02 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:02 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:53:02 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:02.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:53:03 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:53:03 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0780037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:53:03 compute-1 sudo[138808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqoguhbizxgpfuzihuxxemezgznnnnmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101182.8299508-810-86507704785392/AnsiballZ_systemd.py'
Dec 07 09:53:03 compute-1 sudo[138808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:53:03 compute-1 python3.9[138810]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 07 09:53:03 compute-1 systemd[1]: Reloading.
Dec 07 09:53:03 compute-1 systemd-rc-local-generator[138833]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:53:03 compute-1 systemd-sysv-generator[138838]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:53:03 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:03 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:53:03 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:03.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:53:03 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:53:03 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd06c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:53:03 compute-1 sudo[138808]: pam_unix(sudo:session): session closed for user root
Dec 07 09:53:04 compute-1 ceph-mon[80077]: pgmap v317: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:53:04 compute-1 sudo[138998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpevhcclavpowhjnbogxthmsmgtfkigy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101184.0662062-834-193984234342756/AnsiballZ_stat.py'
Dec 07 09:53:04 compute-1 sudo[138998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:53:04 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:53:04 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd060003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:53:04 compute-1 python3.9[139000]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:53:04 compute-1 sudo[138998]: pam_unix(sudo:session): session closed for user root
Dec 07 09:53:04 compute-1 sudo[139076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpxjgvcnairglhlqmprwxdwnnqbslhpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101184.0662062-834-193984234342756/AnsiballZ_file.py'
Dec 07 09:53:04 compute-1 sudo[139076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:53:04 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:04 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:53:04 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:04.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:53:05 compute-1 python3.9[139078]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:53:05 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:53:05 compute-1 sudo[139076]: pam_unix(sudo:session): session closed for user root
Dec 07 09:53:05 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:53:05 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd054003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:53:05 compute-1 ceph-mon[80077]: pgmap v318: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 07 09:53:05 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:05 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:53:05 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:05.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:53:05 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:53:05 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0780037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:53:05 compute-1 sudo[139228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsqqstnsslxwkyolihjyiyxcgctueutb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101185.3955755-870-76160042776502/AnsiballZ_stat.py'
Dec 07 09:53:05 compute-1 sudo[139228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:53:05 compute-1 python3.9[139230]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:53:05 compute-1 sudo[139228]: pam_unix(sudo:session): session closed for user root
Dec 07 09:53:06 compute-1 sudo[139307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aluborxaoqamntmvzyqbvaliqzpypgxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101185.3955755-870-76160042776502/AnsiballZ_file.py'
Dec 07 09:53:06 compute-1 sudo[139307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:53:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:53:06 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd06c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:53:06 compute-1 python3.9[139309]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:53:06 compute-1 sudo[139307]: pam_unix(sudo:session): session closed for user root
Dec 07 09:53:06 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:06 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:53:06 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:06.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:53:06 compute-1 sudo[139459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mghsdferafctuxiyrmrnqiusimisgukj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101186.7087414-906-130538809522765/AnsiballZ_systemd.py'
Dec 07 09:53:06 compute-1 sudo[139459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:53:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:53:07 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd060003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:53:07 compute-1 python3.9[139461]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 07 09:53:07 compute-1 systemd[1]: Reloading.
Dec 07 09:53:07 compute-1 systemd-rc-local-generator[139483]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:53:07 compute-1 systemd-sysv-generator[139487]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:53:07 compute-1 systemd[1]: Starting Create netns directory...
Dec 07 09:53:07 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:07 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:53:07 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:07.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:53:07 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 07 09:53:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:53:07 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd054003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:53:07 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 07 09:53:07 compute-1 systemd[1]: Finished Create netns directory.
Dec 07 09:53:07 compute-1 sudo[139459]: pam_unix(sudo:session): session closed for user root
Dec 07 09:53:07 compute-1 ceph-mon[80077]: pgmap v319: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:53:08 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:53:08 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0780037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:53:08 compute-1 sudo[139652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-diyowrqjwivkorsbsfgyyfuypgqjuhdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101188.0987866-936-70453263295075/AnsiballZ_file.py'
Dec 07 09:53:08 compute-1 sudo[139652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:53:08 compute-1 python3.9[139654]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:53:08 compute-1 sudo[139652]: pam_unix(sudo:session): session closed for user root
Dec 07 09:53:08 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:08 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:53:08 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:08.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:53:09 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:53:09 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd06c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:53:09 compute-1 sudo[139804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uakrlxakokpvzxrsivznjcgmgwbnkkov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101188.89222-960-147125977303953/AnsiballZ_stat.py'
Dec 07 09:53:09 compute-1 sudo[139804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:53:09 compute-1 python3.9[139806]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:53:09 compute-1 sudo[139804]: pam_unix(sudo:session): session closed for user root
Dec 07 09:53:09 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:09 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:53:09 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:09.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:53:09 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:53:09 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd060003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:53:09 compute-1 sudo[139927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-facgbuklyvamlvmlncjkgtxypvowinnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101188.89222-960-147125977303953/AnsiballZ_copy.py'
Dec 07 09:53:09 compute-1 sudo[139927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:53:09 compute-1 python3.9[139929]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765101188.89222-960-147125977303953/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:53:09 compute-1 sudo[139927]: pam_unix(sudo:session): session closed for user root
Dec 07 09:53:10 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:53:10 compute-1 ceph-mon[80077]: pgmap v320: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:53:10 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:53:10 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd054003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:53:10 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:10 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:53:10 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:10.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:53:10 compute-1 sudo[140080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxkbvsybjpdbmeshoxhfpadynfxiczns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101190.7052245-1011-180594268635963/AnsiballZ_file.py'
Dec 07 09:53:10 compute-1 sudo[140080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:53:11 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:53:11 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0780037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:53:11 compute-1 python3.9[140082]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:53:11 compute-1 sudo[140080]: pam_unix(sudo:session): session closed for user root
Dec 07 09:53:11 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:11 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:53:11 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:11.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:53:11 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:53:11 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd06c0041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:53:11 compute-1 sudo[140232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpbuleyuhdtqxkzxfywkrdoakucqqkxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101191.5829113-1035-208408996473709/AnsiballZ_stat.py'
Dec 07 09:53:11 compute-1 sudo[140232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:53:12 compute-1 python3.9[140234]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:53:12 compute-1 sudo[140232]: pam_unix(sudo:session): session closed for user root
Dec 07 09:53:12 compute-1 ceph-mon[80077]: pgmap v321: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 07 09:53:12 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:53:12 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd060003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:53:12 compute-1 sudo[140356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-illxhekvcxrvaenspnhuwxcouqtotpgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101191.5829113-1035-208408996473709/AnsiballZ_copy.py'
Dec 07 09:53:12 compute-1 sudo[140356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:53:12 compute-1 python3.9[140358]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765101191.5829113-1035-208408996473709/.source.json _original_basename=.a4dj44f2 follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:53:12 compute-1 sudo[140356]: pam_unix(sudo:session): session closed for user root
Dec 07 09:53:12 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:12 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:53:12 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:12.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:53:13 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:53:13 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd054004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:53:13 compute-1 sudo[140508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfdikmfguotqnhkqirggzpdtghcpeayl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101193.009417-1080-233742458426903/AnsiballZ_file.py'
Dec 07 09:53:13 compute-1 sudo[140508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:53:13 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:53:13 compute-1 python3.9[140510]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:53:13 compute-1 sudo[140508]: pam_unix(sudo:session): session closed for user root
Dec 07 09:53:13 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:13 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:53:13 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:13.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:53:13 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:53:13 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd060003730 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:53:14 compute-1 sudo[140661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfhdvyktcpaleaypcnojefmhuhobaclz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101193.9758782-1104-176524437962523/AnsiballZ_stat.py'
Dec 07 09:53:14 compute-1 sudo[140661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:53:14 compute-1 ceph-mon[80077]: pgmap v322: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:53:14 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:53:14 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0780037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:53:14 compute-1 sudo[140661]: pam_unix(sudo:session): session closed for user root
Dec 07 09:53:14 compute-1 sudo[140784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iiovcavludcljxagmaqknhskmqccpvyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101193.9758782-1104-176524437962523/AnsiballZ_copy.py'
Dec 07 09:53:14 compute-1 sudo[140784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:53:14 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:14 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:53:14 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:14.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:53:14 compute-1 sudo[140784]: pam_unix(sudo:session): session closed for user root
Dec 07 09:53:15 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:53:15 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:53:15 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd06c004230 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:53:15 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:15 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:53:15 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:15.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:53:15 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:53:15 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd054004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:53:16 compute-1 sudo[140936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tteyeydqjtiuoraxxpcbzumcvdvypdbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101195.547101-1155-264752280890451/AnsiballZ_container_config_data.py'
Dec 07 09:53:16 compute-1 sudo[140936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:53:16 compute-1 python3.9[140938]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Dec 07 09:53:16 compute-1 sudo[140936]: pam_unix(sudo:session): session closed for user root
Dec 07 09:53:16 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:53:16 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd054004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:53:16 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:16 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:53:16 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:16.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:53:17 compute-1 ceph-mon[80077]: pgmap v323: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 07 09:53:17 compute-1 sudo[141089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyshwbyocuythgtllwozzxinrhmvrint ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101196.582401-1182-153209169098083/AnsiballZ_container_config_hash.py'
Dec 07 09:53:17 compute-1 sudo[141089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:53:17 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:53:17 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd0780037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:53:17 compute-1 python3.9[141091]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 07 09:53:17 compute-1 sudo[141089]: pam_unix(sudo:session): session closed for user root
Dec 07 09:53:17 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:53:17 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd06c004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:53:17 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:17 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:53:17 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:17.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:53:17 compute-1 sudo[141168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 09:53:17 compute-1 sudo[141168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:53:17 compute-1 sudo[141168]: pam_unix(sudo:session): session closed for user root
Dec 07 09:53:18 compute-1 sudo[141266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmttkkpnonnelytkodhuhdatkooshmkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101197.6139205-1209-84885960127490/AnsiballZ_podman_container_info.py'
Dec 07 09:53:18 compute-1 sudo[141266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:53:18 compute-1 ceph-mon[80077]: pgmap v324: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:53:18 compute-1 python3.9[141268]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 07 09:53:18 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:53:18 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd054004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:53:18 compute-1 sudo[141266]: pam_unix(sudo:session): session closed for user root
Dec 07 09:53:18 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:18 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:53:18 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:18.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:53:19 compute-1 kernel: ganesha.nfsd[135985]: segfault at 50 ip 00007fd135cc932e sp 00007fd0edffa210 error 4 in libntirpc.so.5.8[7fd135cae000+2c000] likely on CPU 1 (core 0, socket 1)
Dec 07 09:53:19 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec 07 09:53:19 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[132249]: 07/12/2025 09:53:19 : epoch 69354e4d : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd054004140 fd 39 proxy ignored for local
Dec 07 09:53:19 compute-1 systemd[1]: Started Process Core Dump (PID 141321/UID 0).
Dec 07 09:53:19 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:19 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:53:19 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:19.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:53:19 compute-1 radosgw[84964]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Dec 07 09:53:20 compute-1 sudo[141448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhdovyswinbvrcsryeggjrwvxovbrzfv ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765101199.5032225-1248-264142234737672/AnsiballZ_edpm_container_manage.py'
Dec 07 09:53:20 compute-1 sudo[141448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:53:20 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:53:20 compute-1 python3[141450]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 07 09:53:20 compute-1 ceph-mon[80077]: pgmap v325: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:53:20 compute-1 systemd-coredump[141322]: Process 132253 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 55:
                                                    #0  0x00007fd135cc932e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Dec 07 09:53:20 compute-1 systemd[1]: systemd-coredump@3-141321-0.service: Deactivated successfully.
Dec 07 09:53:20 compute-1 systemd[1]: systemd-coredump@3-141321-0.service: Consumed 1.296s CPU time.
Dec 07 09:53:20 compute-1 podman[141482]: 2025-12-07 09:53:20.550636139 +0000 UTC m=+0.028618919 container died b1cbcccabcfab4eb86ca5186d4177421719b3f33324e34ddb36823937a7c9516 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 07 09:53:20 compute-1 systemd[1]: var-lib-containers-storage-overlay-822d98ef02258df8302a01f8da112fc3231cbd2ed65f159e788ed5231f43cd7f-merged.mount: Deactivated successfully.
Dec 07 09:53:20 compute-1 podman[141482]: 2025-12-07 09:53:20.615487051 +0000 UTC m=+0.093469831 container remove b1cbcccabcfab4eb86ca5186d4177421719b3f33324e34ddb36823937a7c9516 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 07 09:53:20 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Main process exited, code=exited, status=139/n/a
Dec 07 09:53:20 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Failed with result 'exit-code'.
Dec 07 09:53:20 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Consumed 1.657s CPU time.
Dec 07 09:53:20 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:20 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:53:20 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:20.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:53:21 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:21 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:53:21 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:21.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:53:22 compute-1 ceph-mon[80077]: pgmap v326: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 07 09:53:22 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:22 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:53:22 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:22.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:53:23 compute-1 ceph-mon[80077]: pgmap v327: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:53:23 compute-1 podman[141573]: 2025-12-07 09:53:23.650967364 +0000 UTC m=+0.139040315 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 07 09:53:23 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:23 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:53:23 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:23.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:53:24 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:24 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:53:24 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:24.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:53:25 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:53:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/095325 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 07 09:53:25 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:25 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 07 09:53:25 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:25.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 07 09:53:25 compute-1 ceph-mon[80077]: pgmap v328: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 0 B/s wr, 153 op/s
Dec 07 09:53:26 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:26 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:53:26 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:26.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:53:27 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:27 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:53:27 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:27.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:53:28 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:28 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 07 09:53:28 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:28.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 07 09:53:29 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:29 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:53:29 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:29.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:53:30 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:53:30 compute-1 podman[141465]: 2025-12-07 09:53:30.060239995 +0000 UTC m=+9.640290377 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3
Dec 07 09:53:30 compute-1 podman[141675]: 2025-12-07 09:53:30.183367761 +0000 UTC m=+0.024425515 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3
Dec 07 09:53:30 compute-1 podman[141675]: 2025-12-07 09:53:30.535138273 +0000 UTC m=+0.376196017 container create 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 07 09:53:30 compute-1 python3[141450]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3
Dec 07 09:53:30 compute-1 sudo[141448]: pam_unix(sudo:session): session closed for user root
Dec 07 09:53:30 compute-1 ceph-mon[80077]: pgmap v329: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 0 B/s wr, 153 op/s
Dec 07 09:53:30 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:53:30 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:30 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:53:30 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:30.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:53:30 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Scheduled restart job, restart counter is at 4.
Dec 07 09:53:30 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.jddrlu for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec 07 09:53:30 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Consumed 1.657s CPU time.
Dec 07 09:53:30 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.jddrlu for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c...
Dec 07 09:53:31 compute-1 podman[141785]: 2025-12-07 09:53:31.187684658 +0000 UTC m=+0.038422778 container create 857e999d5caa585a29da66c051b38fb7d1c5b547b61f83905a0f84100372739e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec 07 09:53:31 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41074d06281cd31323fceb0b757527ee9df82824452eb01a9a6fdd812e7dbef4/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec 07 09:53:31 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41074d06281cd31323fceb0b757527ee9df82824452eb01a9a6fdd812e7dbef4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 07 09:53:31 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41074d06281cd31323fceb0b757527ee9df82824452eb01a9a6fdd812e7dbef4/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec 07 09:53:31 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41074d06281cd31323fceb0b757527ee9df82824452eb01a9a6fdd812e7dbef4/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.jddrlu-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec 07 09:53:31 compute-1 podman[141785]: 2025-12-07 09:53:31.245898684 +0000 UTC m=+0.096636824 container init 857e999d5caa585a29da66c051b38fb7d1c5b547b61f83905a0f84100372739e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 07 09:53:31 compute-1 podman[141785]: 2025-12-07 09:53:31.250498037 +0000 UTC m=+0.101236157 container start 857e999d5caa585a29da66c051b38fb7d1c5b547b61f83905a0f84100372739e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 07 09:53:31 compute-1 bash[141785]: 857e999d5caa585a29da66c051b38fb7d1c5b547b61f83905a0f84100372739e
Dec 07 09:53:31 compute-1 podman[141785]: 2025-12-07 09:53:31.172232923 +0000 UTC m=+0.022971063 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 07 09:53:31 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.jddrlu for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec 07 09:53:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[141799]: 07/12/2025 09:53:31 : epoch 69354e9b : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec 07 09:53:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[141799]: 07/12/2025 09:53:31 : epoch 69354e9b : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec 07 09:53:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[141799]: 07/12/2025 09:53:31 : epoch 69354e9b : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec 07 09:53:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[141799]: 07/12/2025 09:53:31 : epoch 69354e9b : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec 07 09:53:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[141799]: 07/12/2025 09:53:31 : epoch 69354e9b : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec 07 09:53:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[141799]: 07/12/2025 09:53:31 : epoch 69354e9b : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec 07 09:53:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[141799]: 07/12/2025 09:53:31 : epoch 69354e9b : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec 07 09:53:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[141799]: 07/12/2025 09:53:31 : epoch 69354e9b : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 07 09:53:31 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:31 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:53:31 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:31.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:53:31 compute-1 ceph-mon[80077]: pgmap v330: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 0 B/s wr, 153 op/s
Dec 07 09:53:31 compute-1 ceph-mon[80077]: pgmap v331: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 0 B/s wr, 153 op/s
Dec 07 09:53:32 compute-1 sudo[141967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obsftzzbktzaowzqwgiidiqjmqdsymmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101211.7126484-1272-99595299689042/AnsiballZ_stat.py'
Dec 07 09:53:32 compute-1 sudo[141967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:53:32 compute-1 python3.9[141969]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 07 09:53:32 compute-1 sudo[141967]: pam_unix(sudo:session): session closed for user root
Dec 07 09:53:32 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:32 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 07 09:53:32 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:32.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 07 09:53:33 compute-1 sshd-session[141996]: Connection closed by authenticating user root 104.248.193.130 port 55554 [preauth]
Dec 07 09:53:33 compute-1 sudo[142123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbjesxpsmmapedjiguolwkbaltfcjvry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101212.8800106-1299-66154882038912/AnsiballZ_file.py'
Dec 07 09:53:33 compute-1 sudo[142123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:53:33 compute-1 python3.9[142125]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:53:33 compute-1 sudo[142123]: pam_unix(sudo:session): session closed for user root
Dec 07 09:53:33 compute-1 sudo[142199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebmmffjrehfkwvirsxnptebmdcjajuug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101212.8800106-1299-66154882038912/AnsiballZ_stat.py'
Dec 07 09:53:33 compute-1 sudo[142199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:53:33 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:33 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:53:33 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:33.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:53:33 compute-1 python3.9[142201]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 07 09:53:33 compute-1 sudo[142199]: pam_unix(sudo:session): session closed for user root
Dec 07 09:53:33 compute-1 ceph-mon[80077]: pgmap v332: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 0 B/s wr, 153 op/s
Dec 07 09:53:34 compute-1 sudo[142351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzyftpgmncplftnxxxyowylpvasmdbtr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101213.9033098-1299-95914866813451/AnsiballZ_copy.py'
Dec 07 09:53:34 compute-1 sudo[142351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:53:34 compute-1 python3.9[142353]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765101213.9033098-1299-95914866813451/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:53:34 compute-1 sudo[142351]: pam_unix(sudo:session): session closed for user root
Dec 07 09:53:34 compute-1 sudo[142427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhdaqcwqdegmyzdegyttrbiaibqhzuod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101213.9033098-1299-95914866813451/AnsiballZ_systemd.py'
Dec 07 09:53:34 compute-1 sudo[142427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:53:34 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:34 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 07 09:53:34 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:34.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 07 09:53:35 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:53:35 compute-1 python3.9[142429]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 07 09:53:35 compute-1 systemd[1]: Reloading.
Dec 07 09:53:35 compute-1 systemd-rc-local-generator[142457]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:53:35 compute-1 systemd-sysv-generator[142460]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:53:35 compute-1 sudo[142427]: pam_unix(sudo:session): session closed for user root
Dec 07 09:53:35 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:35 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:53:35 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:35.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:53:35 compute-1 sudo[142538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akivvkomjbvuphijbnrjazwxwlkjgilr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101213.9033098-1299-95914866813451/AnsiballZ_systemd.py'
Dec 07 09:53:35 compute-1 sudo[142538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:53:36 compute-1 ceph-mon[80077]: pgmap v333: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 93 KiB/s rd, 511 B/s wr, 154 op/s
Dec 07 09:53:36 compute-1 python3.9[142540]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 07 09:53:36 compute-1 systemd[1]: Reloading.
Dec 07 09:53:36 compute-1 systemd-rc-local-generator[142569]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:53:36 compute-1 systemd-sysv-generator[142573]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:53:36 compute-1 systemd[1]: Starting ovn_metadata_agent container...
Dec 07 09:53:36 compute-1 systemd[1]: Started libcrun container.
Dec 07 09:53:36 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e001d71f14ef3223e138e3145a96a71f6223e4d763319bee6fd15929eb160d03/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Dec 07 09:53:36 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e001d71f14ef3223e138e3145a96a71f6223e4d763319bee6fd15929eb160d03/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 07 09:53:36 compute-1 systemd[1]: Started /usr/bin/podman healthcheck run 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3.
Dec 07 09:53:36 compute-1 podman[142582]: 2025-12-07 09:53:36.615324406 +0000 UTC m=+0.134036992 container init 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 07 09:53:36 compute-1 ovn_metadata_agent[142598]: + sudo -E kolla_set_configs
Dec 07 09:53:36 compute-1 podman[142582]: 2025-12-07 09:53:36.640348077 +0000 UTC m=+0.159060643 container start 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 07 09:53:36 compute-1 edpm-start-podman-container[142582]: ovn_metadata_agent
Dec 07 09:53:36 compute-1 ovn_metadata_agent[142598]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 07 09:53:36 compute-1 ovn_metadata_agent[142598]: INFO:__main__:Validating config file
Dec 07 09:53:36 compute-1 ovn_metadata_agent[142598]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 07 09:53:36 compute-1 ovn_metadata_agent[142598]: INFO:__main__:Copying service configuration files
Dec 07 09:53:36 compute-1 ovn_metadata_agent[142598]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Dec 07 09:53:36 compute-1 ovn_metadata_agent[142598]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Dec 07 09:53:36 compute-1 ovn_metadata_agent[142598]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Dec 07 09:53:36 compute-1 ovn_metadata_agent[142598]: INFO:__main__:Writing out command to execute
Dec 07 09:53:36 compute-1 ovn_metadata_agent[142598]: INFO:__main__:Setting permission for /var/lib/neutron
Dec 07 09:53:36 compute-1 ovn_metadata_agent[142598]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Dec 07 09:53:36 compute-1 ovn_metadata_agent[142598]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Dec 07 09:53:36 compute-1 ovn_metadata_agent[142598]: INFO:__main__:Setting permission for /var/lib/neutron/external
Dec 07 09:53:36 compute-1 ovn_metadata_agent[142598]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Dec 07 09:53:36 compute-1 ovn_metadata_agent[142598]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Dec 07 09:53:36 compute-1 ovn_metadata_agent[142598]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Dec 07 09:53:36 compute-1 ovn_metadata_agent[142598]: ++ cat /run_command
Dec 07 09:53:36 compute-1 edpm-start-podman-container[142581]: Creating additional drop-in dependency for "ovn_metadata_agent" (7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3)
Dec 07 09:53:36 compute-1 ovn_metadata_agent[142598]: + CMD=neutron-ovn-metadata-agent
Dec 07 09:53:36 compute-1 ovn_metadata_agent[142598]: + ARGS=
Dec 07 09:53:36 compute-1 ovn_metadata_agent[142598]: + sudo kolla_copy_cacerts
Dec 07 09:53:36 compute-1 systemd[1]: Reloading.
Dec 07 09:53:36 compute-1 ovn_metadata_agent[142598]: + [[ ! -n '' ]]
Dec 07 09:53:36 compute-1 ovn_metadata_agent[142598]: + . kolla_extend_start
Dec 07 09:53:36 compute-1 ovn_metadata_agent[142598]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Dec 07 09:53:36 compute-1 ovn_metadata_agent[142598]: Running command: 'neutron-ovn-metadata-agent'
Dec 07 09:53:36 compute-1 ovn_metadata_agent[142598]: + umask 0022
Dec 07 09:53:36 compute-1 ovn_metadata_agent[142598]: + exec neutron-ovn-metadata-agent
Dec 07 09:53:36 compute-1 podman[142605]: 2025-12-07 09:53:36.729523055 +0000 UTC m=+0.075456444 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 07 09:53:36 compute-1 systemd-rc-local-generator[142677]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:53:36 compute-1 systemd-sysv-generator[142681]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:53:36 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:36 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:53:36 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:36.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:53:37 compute-1 systemd[1]: Started ovn_metadata_agent container.
Dec 07 09:53:37 compute-1 sudo[142538]: pam_unix(sudo:session): session closed for user root
Dec 07 09:53:37 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[141799]: 07/12/2025 09:53:37 : epoch 69354e9b : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 07 09:53:37 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[141799]: 07/12/2025 09:53:37 : epoch 69354e9b : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 07 09:53:37 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:37 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:53:37 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:37.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:53:37 compute-1 sudo[142712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 09:53:37 compute-1 sudo[142712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:53:37 compute-1 sudo[142712]: pam_unix(sudo:session): session closed for user root
Dec 07 09:53:38 compute-1 ceph-mon[80077]: pgmap v334: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 511 B/s wr, 1 op/s
Dec 07 09:53:38 compute-1 sshd-session[133844]: Connection closed by 192.168.122.30 port 56748
Dec 07 09:53:38 compute-1 sshd-session[133841]: pam_unix(sshd:session): session closed for user zuul
Dec 07 09:53:38 compute-1 systemd[1]: session-51.scope: Deactivated successfully.
Dec 07 09:53:38 compute-1 systemd[1]: session-51.scope: Consumed 56.097s CPU time.
Dec 07 09:53:38 compute-1 systemd-logind[796]: Session 51 logged out. Waiting for processes to exit.
Dec 07 09:53:38 compute-1 systemd-logind[796]: Removed session 51.
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.586 142603 INFO neutron.common.config [-] Logging enabled!
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.586 142603 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.587 142603 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.587 142603 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.587 142603 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.587 142603 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.587 142603 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.587 142603 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.588 142603 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.588 142603 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.588 142603 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.588 142603 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.588 142603 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.588 142603 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.588 142603 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.589 142603 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.589 142603 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.589 142603 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.589 142603 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.589 142603 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.589 142603 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.589 142603 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.589 142603 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.589 142603 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.590 142603 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.590 142603 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.590 142603 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.590 142603 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.590 142603 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.590 142603 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.590 142603 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.590 142603 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.590 142603 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.591 142603 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.591 142603 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.591 142603 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.591 142603 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.591 142603 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.591 142603 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.591 142603 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.591 142603 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.591 142603 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.592 142603 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.592 142603 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.592 142603 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.592 142603 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.592 142603 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.592 142603 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.592 142603 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.592 142603 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.592 142603 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.592 142603 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.593 142603 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.593 142603 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.593 142603 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.593 142603 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.593 142603 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.593 142603 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.593 142603 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.594 142603 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.594 142603 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.594 142603 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.594 142603 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.594 142603 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.594 142603 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.594 142603 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.594 142603 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.594 142603 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.595 142603 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.595 142603 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.595 142603 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.595 142603 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.595 142603 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.595 142603 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.595 142603 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.595 142603 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.595 142603 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.596 142603 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.596 142603 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.596 142603 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.596 142603 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.596 142603 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.596 142603 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.596 142603 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.596 142603 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.596 142603 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.597 142603 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.597 142603 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.597 142603 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.597 142603 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.597 142603 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.597 142603 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.597 142603 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.597 142603 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.598 142603 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.598 142603 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.598 142603 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.598 142603 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.598 142603 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.598 142603 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.598 142603 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.598 142603 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.598 142603 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.598 142603 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.599 142603 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.599 142603 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.599 142603 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.599 142603 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.599 142603 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.599 142603 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.599 142603 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.599 142603 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.599 142603 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.600 142603 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.600 142603 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.600 142603 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.600 142603 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.600 142603 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.600 142603 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.600 142603 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.600 142603 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.600 142603 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.601 142603 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.601 142603 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.601 142603 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.601 142603 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.601 142603 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.601 142603 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.601 142603 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.601 142603 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.601 142603 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.602 142603 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.602 142603 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.602 142603 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.602 142603 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.602 142603 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.602 142603 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.602 142603 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.602 142603 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.602 142603 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.603 142603 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.603 142603 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.603 142603 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.603 142603 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.603 142603 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.603 142603 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.603 142603 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.603 142603 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.603 142603 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.604 142603 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.604 142603 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.604 142603 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.604 142603 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.604 142603 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.604 142603 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.604 142603 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.604 142603 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.604 142603 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.605 142603 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.605 142603 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.605 142603 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.605 142603 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.605 142603 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.605 142603 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.605 142603 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.605 142603 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.606 142603 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.606 142603 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.606 142603 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.606 142603 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.606 142603 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.606 142603 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.607 142603 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.607 142603 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.607 142603 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.607 142603 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.607 142603 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.607 142603 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.607 142603 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.607 142603 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.607 142603 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.608 142603 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.608 142603 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.608 142603 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.608 142603 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.608 142603 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.608 142603 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.608 142603 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.608 142603 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.608 142603 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.609 142603 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.609 142603 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.609 142603 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.609 142603 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.609 142603 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.609 142603 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.609 142603 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.609 142603 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.609 142603 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.610 142603 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.610 142603 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.610 142603 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.610 142603 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.610 142603 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.610 142603 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.610 142603 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.610 142603 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.610 142603 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.611 142603 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.611 142603 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.611 142603 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.611 142603 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.611 142603 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.611 142603 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.611 142603 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.612 142603 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.612 142603 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.612 142603 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.612 142603 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.612 142603 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.612 142603 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.612 142603 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.612 142603 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.612 142603 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.613 142603 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.613 142603 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.613 142603 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.613 142603 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.613 142603 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.613 142603 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.613 142603 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.613 142603 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.613 142603 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.613 142603 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.614 142603 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.614 142603 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.614 142603 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.614 142603 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.614 142603 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.614 142603 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.614 142603 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.614 142603 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.615 142603 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.615 142603 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.615 142603 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.615 142603 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.615 142603 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.615 142603 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.615 142603 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.615 142603 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.615 142603 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.616 142603 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.616 142603 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.616 142603 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.616 142603 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.616 142603 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.616 142603 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.616 142603 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.616 142603 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.617 142603 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.617 142603 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.617 142603 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.617 142603 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.617 142603 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.617 142603 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.617 142603 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.617 142603 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.618 142603 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.618 142603 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.618 142603 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.618 142603 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.618 142603 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.618 142603 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.618 142603 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.618 142603 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.618 142603 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.619 142603 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.619 142603 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.619 142603 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.619 142603 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.619 142603 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.619 142603 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.619 142603 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.619 142603 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.620 142603 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.620 142603 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.620 142603 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.620 142603 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.620 142603 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.620 142603 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.620 142603 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.620 142603 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.621 142603 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.621 142603 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.621 142603 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.621 142603 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.621 142603 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.621 142603 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.621 142603 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.621 142603 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.621 142603 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.621 142603 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.622 142603 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.622 142603 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
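[editor's note] The block ending above is oslo.config's startup dump of every resolved option (secrets masked as ****), each line emitted by ConfigOpts.log_opt_values as the trailing path shows. A minimal sketch of how any oslo.config-based service produces such a dump; the option name registered here is purely illustrative, and the config path is simply the one reported in the "config files" line of the dump, so running this requires read access to that file:

    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    # Illustrative option only; real services register hundreds of these.
    cfg.CONF.register_opts([cfg.BoolOpt('debug_dump_example', default=False)])

    # Parse the same file the agent reports under "config files" above.
    cfg.CONF(args=[], default_config_files=['/etc/neutron/neutron.conf'])

    # Emits one DEBUG line per option, which is what fills the journal here.
    cfg.CONF.log_opt_values(LOG, logging.DEBUG)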
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.630 142603 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.630 142603 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.630 142603 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.630 142603 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.631 142603 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.645 142603 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name e231b22a-cdf9-44dd-ad96-a8e48b3d52da (UUID: e231b22a-cdf9-44dd-ad96-a8e48b3d52da) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
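[editor's note] The two vlog lines above show ovsdbapp connecting to the local ovsdb-server at tcp:127.0.0.1:6640, after which the agent reads its chassis name and OVN bridge from the Open_vSwitch table's external_ids. A minimal sketch of that connection pattern using ovsdbapp's documented API; the 3-second timeout is an assumption, not a value taken from this log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Build an IDL against the same local ovsdb-server the agent connected to.
    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6640', 'Open_vSwitch')
    conn = connection.Connection(idl=idl, timeout=3)  # timeout is illustrative
    ovs = impl_idl.OvsdbIdl(conn)

    # external_ids on the Open_vSwitch row carry settings such as ovn-bridge.
    for row in ovs.db_list('Open_vSwitch', columns=['external_ids']).execute(check_error=True):
        print(row['external_ids'])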
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.665 142603 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.665 142603 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.665 142603 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.665 142603 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.669 142603 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.675 142603 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.681 142603 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'e231b22a-cdf9-44dd-ad96-a8e48b3d52da'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fe635b8f8b0>], external_ids={}, name=e231b22a-cdf9-44dd-ad96-a8e48b3d52da, nb_cfg_timestamp=1765101151262, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.682 142603 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fe635b80f70>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.683 142603 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.683 142603 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.683 142603 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.683 142603 INFO oslo_service.service [-] Starting 1 workers
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.687 142603 DEBUG oslo_service.service [-] Started child 142738 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.691 142603 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpk7lifi28/privsep.sock']
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.691 142738 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-425439'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.719 142738 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.719 142738 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.719 142738 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.723 142738 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.729 142738 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec 07 09:53:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:38.736 142738 INFO eventlet.wsgi.server [-] (142738) wsgi starting up on http:/var/lib/neutron/metadata_proxy
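[editor's note] The worker above starts its WSGI server on the UNIX socket named by metadata_proxy_socket in the config dump below (/var/lib/neutron/metadata_proxy). A minimal sketch of probing that socket with a raw HTTP request from the standard library; in normal operation requests are forwarded from instances rather than sent anonymously, so this is only a local liveness check and assumes permission to open the socket:

    import socket

    SOCK = '/var/lib/neutron/metadata_proxy'  # metadata_proxy_socket from the dump

    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect(SOCK)
    s.sendall(b'GET / HTTP/1.0\r\nHost: 169.254.169.254\r\n\r\n')
    reply = b''
    while True:
        chunk = s.recv(4096)
        if not chunk:
            break
        reply += chunk
    s.close()
    lines = reply.decode(errors='replace').splitlines()
    print(lines[0] if lines else '<no response>')  # HTTP status line, if any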
Dec 07 09:53:38 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:38 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 07 09:53:38 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:38.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 07 09:53:39 compute-1 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Dec 07 09:53:39 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:39.462 142603 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 07 09:53:39 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:39.463 142603 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpk7lifi28/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec 07 09:53:39 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:39.296 142743 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 07 09:53:39 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:39.300 142743 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 07 09:53:39 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:39.302 142743 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Dec 07 09:53:39 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:39.302 142743 INFO oslo.privsep.daemon [-] privsep daemon running as pid 142743
Dec 07 09:53:39 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:39.466 142743 DEBUG oslo.privsep.daemon [-] privsep: reply[9daa64ce-0934-44e2-aad5-5ac08139eb1b]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
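[editor's note] The block above shows the agent forking a privsep daemon through sudo/neutron-rootwrap and that daemon running with only CAP_SYS_ADMIN, which matches privsep_namespace.capabilities = [21] in the config dump below (21 is CAP_SYS_ADMIN). A minimal sketch of the oslo.privsep pattern behind this; the context is modeled on the neutron.privileged.namespace_cmd context named in the helper command, and the decorated function name is hypothetical:

    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context

    # cfg_section matches the [privsep_namespace] options logged further down.
    namespace_cmd = priv_context.PrivContext(
        'neutron',
        cfg_section='privsep_namespace',
        pypath=__name__ + '.namespace_cmd',
        capabilities=[caps.CAP_SYS_ADMIN],
    )

    @namespace_cmd.entrypoint
    def create_netns(name):
        # Hypothetical helper: the body runs inside the privsep daemon with
        # CAP_SYS_ADMIN; the first call from unprivileged code spawns the
        # daemon via the configured rootwrap helper, as seen in this log.
        pass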
Dec 07 09:53:39 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:39 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:53:39 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:39.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:53:40 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.100 142743 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.100 142743 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.100 142743 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 09:53:40 compute-1 ceph-mon[80077]: pgmap v335: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.747 142743 DEBUG oslo.privsep.daemon [-] privsep: reply[ad9edb16-40fd-458d-8c2a-ead731a4ccfc]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.749 142603 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=e231b22a-cdf9-44dd-ad96-a8e48b3d52da, column=external_ids, values=({'neutron:ovn-metadata-id': '34953944-114e-5cec-b000-63a37367e02a'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.780 142603 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e231b22a-cdf9-44dd-ad96-a8e48b3d52da, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
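[editor's note] The two transactions above record the metadata proxy UUID and the OVN bridge in the chassis' external_ids. A minimal sketch of the equivalent ovsdbapp calls, assuming sb_idl is an already-built handle to the southbound IDL API (the "Getting OvsdbSbOvnIdl" step earlier); the UUIDs are the ones shown in the log:

    CHASSIS = 'e231b22a-cdf9-44dd-ad96-a8e48b3d52da'

    with sb_idl.transaction(check_error=True) as txn:
        # Produces the DbAddCommand seen in the log.
        txn.add(sb_idl.db_add('Chassis_Private', CHASSIS, 'external_ids',
                              {'neutron:ovn-metadata-id':
                               '34953944-114e-5cec-b000-63a37367e02a'}))
        # Produces the DbSetCommand seen in the log.
        txn.add(sb_idl.db_set('Chassis_Private', CHASSIS,
                              ('external_ids', {'neutron:ovn-bridge': 'br-int'}),
                              if_exists=True))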
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.788 142603 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.788 142603 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.789 142603 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.789 142603 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.789 142603 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.789 142603 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.789 142603 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.789 142603 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.789 142603 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.789 142603 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.790 142603 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.790 142603 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.790 142603 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.790 142603 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.790 142603 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.790 142603 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.790 142603 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.790 142603 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.791 142603 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.791 142603 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.791 142603 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.791 142603 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.791 142603 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.791 142603 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.791 142603 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.791 142603 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.792 142603 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.792 142603 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.792 142603 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.792 142603 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.792 142603 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.792 142603 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.792 142603 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.793 142603 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.793 142603 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.793 142603 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.793 142603 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.793 142603 DEBUG oslo_service.service [-] host                           = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.793 142603 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.793 142603 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.794 142603 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.794 142603 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.794 142603 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.794 142603 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.794 142603 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.794 142603 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.794 142603 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.794 142603 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.795 142603 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.795 142603 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.795 142603 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.795 142603 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.795 142603 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.795 142603 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.795 142603 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.795 142603 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.795 142603 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.795 142603 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.796 142603 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.796 142603 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.796 142603 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.796 142603 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.796 142603 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.796 142603 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.796 142603 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.796 142603 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.796 142603 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.797 142603 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.797 142603 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.797 142603 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.797 142603 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.797 142603 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.797 142603 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.797 142603 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.797 142603 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.797 142603 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.797 142603 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.798 142603 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.798 142603 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.798 142603 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.798 142603 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.798 142603 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.798 142603 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.798 142603 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.798 142603 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.798 142603 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.798 142603 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.799 142603 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.799 142603 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.799 142603 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.799 142603 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.799 142603 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.799 142603 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.799 142603 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.799 142603 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.799 142603 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.799 142603 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.800 142603 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.800 142603 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.800 142603 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.800 142603 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.800 142603 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.800 142603 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.800 142603 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.800 142603 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.801 142603 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.801 142603 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.801 142603 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.801 142603 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.801 142603 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.801 142603 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.801 142603 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.801 142603 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.802 142603 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.802 142603 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.802 142603 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.802 142603 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.802 142603 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.802 142603 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.802 142603 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.802 142603 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.803 142603 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.803 142603 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.803 142603 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.803 142603 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.803 142603 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.803 142603 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.803 142603 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.804 142603 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.804 142603 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.804 142603 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.804 142603 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.804 142603 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.804 142603 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.804 142603 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.804 142603 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.805 142603 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.805 142603 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.805 142603 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.805 142603 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.805 142603 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.805 142603 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.805 142603 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.805 142603 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.806 142603 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.806 142603 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.806 142603 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.806 142603 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.806 142603 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.806 142603 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.806 142603 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.807 142603 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.807 142603 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.807 142603 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.807 142603 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.807 142603 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.807 142603 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.807 142603 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.807 142603 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.807 142603 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.807 142603 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.807 142603 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.808 142603 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.808 142603 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.808 142603 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.808 142603 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.808 142603 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.808 142603 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.808 142603 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.808 142603 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.808 142603 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.809 142603 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.809 142603 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.809 142603 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.809 142603 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.809 142603 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.809 142603 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.809 142603 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.809 142603 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.809 142603 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.810 142603 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.810 142603 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.810 142603 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.810 142603 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.810 142603 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.810 142603 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.810 142603 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.811 142603 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.811 142603 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.811 142603 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.811 142603 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.811 142603 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.811 142603 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.811 142603 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.811 142603 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.812 142603 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.812 142603 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.812 142603 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.812 142603 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.812 142603 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.812 142603 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.812 142603 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.813 142603 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.813 142603 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.813 142603 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.813 142603 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.813 142603 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.813 142603 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.813 142603 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.814 142603 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.814 142603 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.814 142603 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.814 142603 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.814 142603 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.814 142603 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.814 142603 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.814 142603 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.815 142603 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.815 142603 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.815 142603 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.815 142603 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.815 142603 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.815 142603 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.815 142603 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.815 142603 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.816 142603 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.816 142603 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.816 142603 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.816 142603 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.816 142603 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.816 142603 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.816 142603 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.816 142603 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.817 142603 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.817 142603 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.817 142603 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.817 142603 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.817 142603 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.817 142603 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.817 142603 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.818 142603 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.818 142603 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.818 142603 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.818 142603 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.818 142603 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.818 142603 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.818 142603 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.819 142603 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.819 142603 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.819 142603 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.819 142603 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.819 142603 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.819 142603 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.819 142603 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.820 142603 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.820 142603 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.820 142603 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.820 142603 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.820 142603 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.820 142603 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.820 142603 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.821 142603 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.821 142603 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.821 142603 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.821 142603 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.821 142603 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.821 142603 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.821 142603 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.822 142603 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.822 142603 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.822 142603 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.822 142603 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.826 142603 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.827 142603 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.827 142603 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.827 142603 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.827 142603 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.827 142603 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.827 142603 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.827 142603 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.828 142603 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.828 142603 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.828 142603 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.828 142603 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.828 142603 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.828 142603 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.828 142603 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.829 142603 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.829 142603 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.829 142603 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.829 142603 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.829 142603 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.829 142603 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.829 142603 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.830 142603 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.830 142603 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.830 142603 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.830 142603 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.830 142603 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.830 142603 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.830 142603 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.831 142603 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.831 142603 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 09:53:40 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:53:40.831 142603 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 07 09:53:40 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:40 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:53:40 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:40.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:53:41 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:41 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 07 09:53:41 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:41.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 07 09:53:42 compute-1 ceph-mon[80077]: pgmap v336: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 07 09:53:42 compute-1 sudo[142750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 09:53:42 compute-1 sudo[142750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:53:42 compute-1 sudo[142750]: pam_unix(sudo:session): session closed for user root
Dec 07 09:53:42 compute-1 sudo[142775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 07 09:53:42 compute-1 sudo[142775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:53:42 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:42 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:53:42 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:42.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:53:43 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:53:43 compute-1 sudo[142775]: pam_unix(sudo:session): session closed for user root
Dec 07 09:53:43 compute-1 sshd-session[142821]: Accepted publickey for zuul from 192.168.122.30 port 43046 ssh2: ECDSA SHA256:Ge4vEI3tJyOAEV3kv3WUGTzBV/uoL+dgWZHkPhbKL64
Dec 07 09:53:43 compute-1 systemd-logind[796]: New session 52 of user zuul.
Dec 07 09:53:43 compute-1 systemd[1]: Started Session 52 of User zuul.
Dec 07 09:53:43 compute-1 sshd-session[142821]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 07 09:53:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[141799]: 07/12/2025 09:53:43 : epoch 69354e9b : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 07 09:53:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[141799]: 07/12/2025 09:53:43 : epoch 69354e9b : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec 07 09:53:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[141799]: 07/12/2025 09:53:43 : epoch 69354e9b : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec 07 09:53:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[141799]: 07/12/2025 09:53:43 : epoch 69354e9b : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec 07 09:53:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[141799]: 07/12/2025 09:53:43 : epoch 69354e9b : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec 07 09:53:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[141799]: 07/12/2025 09:53:43 : epoch 69354e9b : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec 07 09:53:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[141799]: 07/12/2025 09:53:43 : epoch 69354e9b : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec 07 09:53:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[141799]: 07/12/2025 09:53:43 : epoch 69354e9b : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 09:53:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[141799]: 07/12/2025 09:53:43 : epoch 69354e9b : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 09:53:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[141799]: 07/12/2025 09:53:43 : epoch 69354e9b : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 09:53:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[141799]: 07/12/2025 09:53:43 : epoch 69354e9b : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec 07 09:53:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[141799]: 07/12/2025 09:53:43 : epoch 69354e9b : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 09:53:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[141799]: 07/12/2025 09:53:43 : epoch 69354e9b : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec 07 09:53:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[141799]: 07/12/2025 09:53:43 : epoch 69354e9b : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec 07 09:53:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[141799]: 07/12/2025 09:53:43 : epoch 69354e9b : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec 07 09:53:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[141799]: 07/12/2025 09:53:43 : epoch 69354e9b : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec 07 09:53:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[141799]: 07/12/2025 09:53:43 : epoch 69354e9b : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec 07 09:53:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[141799]: 07/12/2025 09:53:43 : epoch 69354e9b : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec 07 09:53:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[141799]: 07/12/2025 09:53:43 : epoch 69354e9b : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec 07 09:53:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[141799]: 07/12/2025 09:53:43 : epoch 69354e9b : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec 07 09:53:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[141799]: 07/12/2025 09:53:43 : epoch 69354e9b : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec 07 09:53:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[141799]: 07/12/2025 09:53:43 : epoch 69354e9b : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec 07 09:53:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[141799]: 07/12/2025 09:53:43 : epoch 69354e9b : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec 07 09:53:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[141799]: 07/12/2025 09:53:43 : epoch 69354e9b : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec 07 09:53:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[141799]: 07/12/2025 09:53:43 : epoch 69354e9b : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 07 09:53:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[141799]: 07/12/2025 09:53:43 : epoch 69354e9b : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec 07 09:53:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[141799]: 07/12/2025 09:53:43 : epoch 69354e9b : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 07 09:53:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[141799]: 07/12/2025 09:53:43 : epoch 69354e9b : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16b4000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:53:43 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:43 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:53:43 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:43.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:53:44 compute-1 ceph-mon[80077]: pgmap v337: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 07 09:53:44 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:53:44 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 07 09:53:44 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:53:44 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:53:44 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 07 09:53:44 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 07 09:53:44 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:53:44 compute-1 python3.9[142999]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 07 09:53:44 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[141799]: 07/12/2025 09:53:44 : epoch 69354e9b : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16a8001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:53:44 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:44 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 07 09:53:44 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:44.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 07 09:53:45 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:53:45 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[141799]: 07/12/2025 09:53:45 : epoch 69354e9b : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1690000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:53:45 compute-1 sudo[143154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcrayfwplxvhltltavykvyjzlimnmplp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101224.960078-63-166821839031523/AnsiballZ_command.py'
Dec 07 09:53:45 compute-1 sudo[143154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:53:45 compute-1 python3.9[143156]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:53:45 compute-1 sudo[143154]: pam_unix(sudo:session): session closed for user root
Dec 07 09:53:45 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[141799]: 07/12/2025 09:53:45 : epoch 69354e9b : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1688000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:53:45 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:45 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 07 09:53:45 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:45.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 07 09:53:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[141799]: 07/12/2025 09:53:46 : epoch 69354e9b : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1694000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:53:46 compute-1 ceph-mon[80077]: pgmap v338: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 07 09:53:46 compute-1 sudo[143320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdlywlrnbpkdoigbdvhdmgixrlkkasli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101226.2018368-96-199810728868214/AnsiballZ_systemd_service.py'
Dec 07 09:53:46 compute-1 sudo[143320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:53:46 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:46 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:53:46 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:46.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:53:47 compute-1 kernel: ganesha.nfsd[142901]: segfault at 50 ip 00007f175fd4532e sp 00007f1713ffe210 error 4 in libntirpc.so.5.8[7f175fd2a000+2c000] likely on CPU 3 (core 0, socket 3)
Dec 07 09:53:47 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec 07 09:53:47 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[141799]: 07/12/2025 09:53:47 : epoch 69354e9b : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f16a8001c00 fd 39 proxy ignored for local
Dec 07 09:53:47 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/095347 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 07 09:53:47 compute-1 systemd[1]: Started Process Core Dump (PID 143323/UID 0).
Dec 07 09:53:47 compute-1 python3.9[143322]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 07 09:53:47 compute-1 systemd[1]: Reloading.
Dec 07 09:53:47 compute-1 systemd-rc-local-generator[143351]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:53:47 compute-1 systemd-sysv-generator[143354]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:53:47 compute-1 sudo[143320]: pam_unix(sudo:session): session closed for user root
Dec 07 09:53:47 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:47 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:53:47 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:47.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:53:48 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:48 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:53:48 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:48.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:53:49 compute-1 python3.9[143511]: ansible-ansible.builtin.service_facts Invoked
Dec 07 09:53:49 compute-1 network[143528]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 07 09:53:49 compute-1 network[143529]: 'network-scripts' will be removed from distribution in near future.
Dec 07 09:53:49 compute-1 network[143530]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 07 09:53:49 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:49 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:53:49 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:49.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:53:50 compute-1 systemd-coredump[143324]: Process 141803 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 54:
                                                    #0  0x00007f175fd4532e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Dec 07 09:53:50 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:53:50 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:50 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 07 09:53:50 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:50.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 07 09:53:51 compute-1 systemd[1]: systemd-coredump@4-143323-0.service: Deactivated successfully.
Dec 07 09:53:51 compute-1 systemd[1]: systemd-coredump@4-143323-0.service: Consumed 1.424s CPU time.
Dec 07 09:53:51 compute-1 podman[143601]: 2025-12-07 09:53:51.101800505 +0000 UTC m=+0.038836679 container died 857e999d5caa585a29da66c051b38fb7d1c5b547b61f83905a0f84100372739e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 07 09:53:51 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:51 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:53:51 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:51.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:53:52 compute-1 ceph-mon[80077]: pgmap v339: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 511 B/s wr, 2 op/s
Dec 07 09:53:52 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:52 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:53:52 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:52.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:53:53 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:53 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 07 09:53:53 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:53.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 07 09:53:53 compute-1 systemd[1]: var-lib-containers-storage-overlay-41074d06281cd31323fceb0b757527ee9df82824452eb01a9a6fdd812e7dbef4-merged.mount: Deactivated successfully.
Dec 07 09:53:53 compute-1 ceph-mon[80077]: pgmap v340: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 511 B/s wr, 2 op/s
Dec 07 09:53:53 compute-1 ceph-mon[80077]: pgmap v341: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Dec 07 09:53:53 compute-1 podman[143601]: 2025-12-07 09:53:53.868685547 +0000 UTC m=+2.805721721 container remove 857e999d5caa585a29da66c051b38fb7d1c5b547b61f83905a0f84100372739e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 07 09:53:53 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Main process exited, code=exited, status=139/n/a
Dec 07 09:53:54 compute-1 sudo[143712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 07 09:53:54 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Failed with result 'exit-code'.
Dec 07 09:53:54 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Consumed 1.577s CPU time.
Dec 07 09:53:54 compute-1 sudo[143712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:53:54 compute-1 sudo[143712]: pam_unix(sudo:session): session closed for user root
Dec 07 09:53:54 compute-1 sudo[143876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkbleeinofgkqtuiavdokdqpptirgssx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101234.1173556-153-171239158944852/AnsiballZ_systemd_service.py'
Dec 07 09:53:54 compute-1 sudo[143876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:53:54 compute-1 python3.9[143878]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 07 09:53:54 compute-1 sudo[143876]: pam_unix(sudo:session): session closed for user root
Dec 07 09:53:54 compute-1 ceph-mon[80077]: pgmap v342: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 07 09:53:54 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:53:54 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:53:54 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:54 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 07 09:53:54 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:54.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 07 09:53:55 compute-1 sudo[144029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rylntypnpoidzzndoioirbjtwdzgvjkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101234.8298876-153-125413487463964/AnsiballZ_systemd_service.py'
Dec 07 09:53:55 compute-1 sudo[144029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:53:55 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/095355 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 07 09:53:55 compute-1 python3.9[144031]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 07 09:53:55 compute-1 sudo[144029]: pam_unix(sudo:session): session closed for user root
Dec 07 09:53:55 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:53:55 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:55 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 07 09:53:55 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:55.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 07 09:53:55 compute-1 sudo[144182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjeiordiirylbvyozsslyjovcrpqrhgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101235.5491674-153-100180184514551/AnsiballZ_systemd_service.py'
Dec 07 09:53:55 compute-1 sudo[144182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:53:55 compute-1 ceph-mon[80077]: pgmap v343: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec 07 09:53:56 compute-1 python3.9[144184]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 07 09:53:56 compute-1 sudo[144182]: pam_unix(sudo:session): session closed for user root
Dec 07 09:53:56 compute-1 sudo[144336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjuagbijzvsbpntedscbysnjivddvpvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101236.26369-153-47026256304758/AnsiballZ_systemd_service.py'
Dec 07 09:53:56 compute-1 sudo[144336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:53:56 compute-1 python3.9[144338]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 07 09:53:56 compute-1 sudo[144336]: pam_unix(sudo:session): session closed for user root
Dec 07 09:53:56 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:56 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:53:56 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:56.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:53:57 compute-1 sudo[144506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgcnxyvworgsxmscbqfdjjyuyvgeaxqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101236.9671917-153-117048680479408/AnsiballZ_systemd_service.py'
Dec 07 09:53:57 compute-1 sudo[144506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:53:57 compute-1 podman[144463]: 2025-12-07 09:53:57.305530006 +0000 UTC m=+0.080213482 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 07 09:53:57 compute-1 python3.9[144514]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 07 09:53:57 compute-1 sudo[144506]: pam_unix(sudo:session): session closed for user root
Dec 07 09:53:57 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:57 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 07 09:53:57 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:57.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 07 09:53:58 compute-1 sudo[144642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 09:53:58 compute-1 sudo[144642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:53:58 compute-1 sudo[144642]: pam_unix(sudo:session): session closed for user root
Dec 07 09:53:58 compute-1 sudo[144693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olnusnmcfwcmvfyofuqudwlmkftrkkqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101237.7553756-153-46892443721360/AnsiballZ_systemd_service.py'
Dec 07 09:53:58 compute-1 sudo[144693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:53:58 compute-1 ceph-mon[80077]: pgmap v344: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 07 09:53:58 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:53:58 compute-1 python3.9[144695]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 07 09:53:58 compute-1 sudo[144693]: pam_unix(sudo:session): session closed for user root
Dec 07 09:53:58 compute-1 sudo[144847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydljhnuvplbljrbognypyxvpudomojsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101238.5371046-153-179348842031234/AnsiballZ_systemd_service.py'
Dec 07 09:53:58 compute-1 sudo[144847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:53:58 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:58 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 07 09:53:58 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:53:58.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 07 09:53:59 compute-1 python3.9[144849]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 07 09:53:59 compute-1 sudo[144847]: pam_unix(sudo:session): session closed for user root
Dec 07 09:53:59 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:53:59 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:53:59 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:53:59.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:54:00 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:00 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:54:00 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:00.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:54:01 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:54:01 compute-1 sudo[145002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvoomfxfmghjyftdycxdjmgxhqyfmxdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101241.024982-309-235006255468376/AnsiballZ_file.py'
Dec 07 09:54:01 compute-1 sudo[145002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:54:01 compute-1 ceph-mon[80077]: pgmap v345: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 07 09:54:01 compute-1 python3.9[145004]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:54:01 compute-1 sudo[145002]: pam_unix(sudo:session): session closed for user root
Dec 07 09:54:01 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:01 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:54:01 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:54:01.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:54:02 compute-1 sudo[145154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aoeuhdkbdltruoqghzlomhqautrrmxzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101241.7806728-309-39712461279761/AnsiballZ_file.py'
Dec 07 09:54:02 compute-1 sudo[145154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:54:02 compute-1 python3.9[145156]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:54:02 compute-1 sudo[145154]: pam_unix(sudo:session): session closed for user root
Dec 07 09:54:02 compute-1 ceph-mon[80077]: pgmap v346: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 07 09:54:02 compute-1 sudo[145307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-leopcqokyxsppzcvroqbrwsmatylscvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101242.3578358-309-258703197085184/AnsiballZ_file.py'
Dec 07 09:54:02 compute-1 sudo[145307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:54:02 compute-1 python3.9[145309]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:54:02 compute-1 sudo[145307]: pam_unix(sudo:session): session closed for user root
Dec 07 09:54:02 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:02 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:54:02 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:02.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:54:03 compute-1 sudo[145459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgkdtlornimdffcfkkxxscxheepecfjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101242.994211-309-212186724465364/AnsiballZ_file.py'
Dec 07 09:54:03 compute-1 sudo[145459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:54:03 compute-1 python3.9[145461]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:54:03 compute-1 sudo[145459]: pam_unix(sudo:session): session closed for user root
Dec 07 09:54:03 compute-1 ceph-mon[80077]: pgmap v347: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 07 09:54:03 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:03 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:54:03 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:54:03.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:54:03 compute-1 sudo[145611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-covkrxbydkwybslsgqgezmkybpdzrdso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101243.6414838-309-229019018008513/AnsiballZ_file.py'
Dec 07 09:54:03 compute-1 sudo[145611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:54:04 compute-1 python3.9[145613]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:54:04 compute-1 sudo[145611]: pam_unix(sudo:session): session closed for user root
Dec 07 09:54:04 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Scheduled restart job, restart counter is at 5.
Dec 07 09:54:04 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.jddrlu for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec 07 09:54:04 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Consumed 1.577s CPU time.
Dec 07 09:54:04 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.jddrlu for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c...
Dec 07 09:54:04 compute-1 podman[145763]: 2025-12-07 09:54:04.434418902 +0000 UTC m=+0.044612725 container create 03a124eb13e28113f1f948c170094d32709e076e5a926740ef26dfb5a5924436 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec 07 09:54:04 compute-1 sudo[145823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trxwvurvjultxujgdjwjhdxawrallmwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101244.2523339-309-257874758061350/AnsiballZ_file.py'
Dec 07 09:54:04 compute-1 sudo[145823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:54:04 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dcf2e693f084487d745dba7fe6426dce8de17ed6533961ce722c5d8d2b9204a/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec 07 09:54:04 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dcf2e693f084487d745dba7fe6426dce8de17ed6533961ce722c5d8d2b9204a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 07 09:54:04 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dcf2e693f084487d745dba7fe6426dce8de17ed6533961ce722c5d8d2b9204a/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec 07 09:54:04 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dcf2e693f084487d745dba7fe6426dce8de17ed6533961ce722c5d8d2b9204a/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.jddrlu-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec 07 09:54:04 compute-1 podman[145763]: 2025-12-07 09:54:04.489218991 +0000 UTC m=+0.099412844 container init 03a124eb13e28113f1f948c170094d32709e076e5a926740ef26dfb5a5924436 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 07 09:54:04 compute-1 podman[145763]: 2025-12-07 09:54:04.493636528 +0000 UTC m=+0.103830351 container start 03a124eb13e28113f1f948c170094d32709e076e5a926740ef26dfb5a5924436 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec 07 09:54:04 compute-1 bash[145763]: 03a124eb13e28113f1f948c170094d32709e076e5a926740ef26dfb5a5924436
Dec 07 09:54:04 compute-1 podman[145763]: 2025-12-07 09:54:04.413320615 +0000 UTC m=+0.023514458 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 07 09:54:04 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:04 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec 07 09:54:04 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:04 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec 07 09:54:04 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.jddrlu for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec 07 09:54:04 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:04 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec 07 09:54:04 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:04 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec 07 09:54:04 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:04 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec 07 09:54:04 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:04 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec 07 09:54:04 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:04 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec 07 09:54:04 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:04 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 07 09:54:04 compute-1 python3.9[145827]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:54:04 compute-1 sudo[145823]: pam_unix(sudo:session): session closed for user root
Dec 07 09:54:04 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:04 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:54:04 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:04.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:54:05 compute-1 sudo[146019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrpkowzpmfomhkywpeenaxuoezhvdwcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101244.78885-309-7262860605452/AnsiballZ_file.py'
Dec 07 09:54:05 compute-1 sudo[146019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:54:05 compute-1 python3.9[146021]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:54:05 compute-1 sudo[146019]: pam_unix(sudo:session): session closed for user root
Dec 07 09:54:05 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:05 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:54:05 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:54:05.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:54:06 compute-1 ceph-mon[80077]: pgmap v348: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 07 09:54:06 compute-1 sudo[146172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfspcdohmpohelrfzjwlrgmzqiwnmwct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101246.1577065-459-98315866099950/AnsiballZ_file.py'
Dec 07 09:54:06 compute-1 sudo[146172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:54:06 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:54:06 compute-1 python3.9[146174]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:54:06 compute-1 sudo[146172]: pam_unix(sudo:session): session closed for user root
Dec 07 09:54:06 compute-1 sudo[146324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbzypjzbozqynclvzeewezarhpajsbyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101246.7472265-459-223613144916420/AnsiballZ_file.py'
Dec 07 09:54:06 compute-1 sudo[146324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:54:06 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:06 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:54:06 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:06.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:54:07 compute-1 python3.9[146326]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:54:07 compute-1 sudo[146324]: pam_unix(sudo:session): session closed for user root
Dec 07 09:54:07 compute-1 podman[146439]: 2025-12-07 09:54:07.541880794 +0000 UTC m=+0.044441081 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Dec 07 09:54:07 compute-1 sudo[146495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slancmjzeblkzwuumvouqjbzyutiakes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101247.3067613-459-162029801111718/AnsiballZ_file.py'
Dec 07 09:54:07 compute-1 sudo[146495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:54:07 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:07 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:54:07 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:54:07.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:54:07 compute-1 python3.9[146497]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:54:07 compute-1 sudo[146495]: pam_unix(sudo:session): session closed for user root
Dec 07 09:54:08 compute-1 sudo[146648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvshkshgadiojjbnlfpbddkanjcwuxvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101247.883304-459-27639870888294/AnsiballZ_file.py'
Dec 07 09:54:08 compute-1 sudo[146648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:54:08 compute-1 ceph-mon[80077]: pgmap v349: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 07 09:54:08 compute-1 python3.9[146650]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:54:08 compute-1 sudo[146648]: pam_unix(sudo:session): session closed for user root
Dec 07 09:54:08 compute-1 sudo[146800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyqtgrjmwsgzcstmgqjtkjemvviwzwyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101248.5009594-459-186254885102173/AnsiballZ_file.py'
Dec 07 09:54:08 compute-1 sudo[146800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:54:08 compute-1 python3.9[146802]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:54:08 compute-1 sudo[146800]: pam_unix(sudo:session): session closed for user root
Dec 07 09:54:08 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:08 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:54:08 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:08.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:54:09 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/095409 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 07 09:54:09 compute-1 sudo[146952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qblohryqndqlpnufjyrczgcmqhoianfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101249.1289606-459-187410918355437/AnsiballZ_file.py'
Dec 07 09:54:09 compute-1 sudo[146952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:54:09 compute-1 python3.9[146954]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:54:09 compute-1 sudo[146952]: pam_unix(sudo:session): session closed for user root
Dec 07 09:54:09 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:09 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:54:09 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:54:09.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:54:10 compute-1 ceph-mon[80077]: pgmap v350: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 85 B/s wr, 0 op/s
Dec 07 09:54:10 compute-1 sudo[147105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oumeoaxvtaihzcjvnsbwrvlfgczmrtsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101249.9768412-459-216088318706102/AnsiballZ_file.py'
Dec 07 09:54:10 compute-1 sudo[147105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:54:10 compute-1 python3.9[147107]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:54:10 compute-1 sudo[147105]: pam_unix(sudo:session): session closed for user root
Dec 07 09:54:10 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:10 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 07 09:54:10 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:10 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 07 09:54:10 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:10 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 07 09:54:10 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:10 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:54:10 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:10.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:54:11 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:54:11 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:11 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:54:11 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:54:11.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:54:11 compute-1 sudo[147257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxlwjgvyoryhwjjgnpjdixgphyvqxjrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101251.6283321-612-101936765597820/AnsiballZ_command.py'
Dec 07 09:54:11 compute-1 sudo[147257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:54:12 compute-1 python3.9[147259]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:54:12 compute-1 sudo[147257]: pam_unix(sudo:session): session closed for user root
Dec 07 09:54:12 compute-1 ceph-mon[80077]: pgmap v351: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Dec 07 09:54:13 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:13 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:54:13 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:12.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:54:13 compute-1 python3.9[147412]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 07 09:54:13 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:54:13 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:13 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:54:13 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:54:13.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:54:13 compute-1 sudo[147562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yiwimbrfyfellsxwfnjqukslkvyrkpwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101253.5013847-666-123579162980665/AnsiballZ_systemd_service.py'
Dec 07 09:54:13 compute-1 sudo[147562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:54:14 compute-1 python3.9[147564]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 07 09:54:14 compute-1 systemd[1]: Reloading.
Dec 07 09:54:14 compute-1 systemd-sysv-generator[147592]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:54:14 compute-1 systemd-rc-local-generator[147589]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:54:14 compute-1 ceph-mon[80077]: pgmap v352: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Dec 07 09:54:14 compute-1 sudo[147562]: pam_unix(sudo:session): session closed for user root
Dec 07 09:54:14 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:14 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 07 09:54:14 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:14 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 07 09:54:14 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:14 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 07 09:54:15 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:15 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:54:15 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:15.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:54:15 compute-1 sudo[147749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oixepgpzzsdrxvxpfkscbyzqzompgbwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101254.7471726-690-41310730534213/AnsiballZ_command.py'
Dec 07 09:54:15 compute-1 sudo[147749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:54:15 compute-1 python3.9[147751]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:54:15 compute-1 sudo[147749]: pam_unix(sudo:session): session closed for user root
Dec 07 09:54:15 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:15 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:54:15 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:54:15.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:54:15 compute-1 sudo[147902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejgrjslektfdlsweyvukazyglgznnawf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101255.4504423-690-125214197903133/AnsiballZ_command.py'
Dec 07 09:54:15 compute-1 sudo[147902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:54:16 compute-1 python3.9[147904]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:54:16 compute-1 sudo[147902]: pam_unix(sudo:session): session closed for user root
Dec 07 09:54:16 compute-1 ceph-mon[80077]: pgmap v353: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 1 op/s
Dec 07 09:54:16 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:54:16 compute-1 sudo[148056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utangvrfpwjwqrmcieifmysvrlkskfto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101256.1962554-690-138441623858211/AnsiballZ_command.py'
Dec 07 09:54:16 compute-1 sudo[148056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:54:16 compute-1 python3.9[148058]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:54:16 compute-1 sudo[148056]: pam_unix(sudo:session): session closed for user root
Dec 07 09:54:17 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:17 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 07 09:54:17 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:17.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 07 09:54:17 compute-1 sudo[148209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxzmzlussnfebuffljfbaqrtfflkxdkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101256.878312-690-261348255498051/AnsiballZ_command.py'
Dec 07 09:54:17 compute-1 sudo[148209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:54:17 compute-1 python3.9[148211]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:54:17 compute-1 sudo[148209]: pam_unix(sudo:session): session closed for user root
Dec 07 09:54:17 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:17 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:54:17 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:54:17.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:54:17 compute-1 ceph-mon[80077]: pgmap v354: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Dec 07 09:54:17 compute-1 sudo[148362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znzqfnyxsbtmpwvrkyvtkcilkhjnxgkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101257.5378692-690-228642683751704/AnsiballZ_command.py'
Dec 07 09:54:17 compute-1 sudo[148362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:54:18 compute-1 python3.9[148364]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:54:18 compute-1 sudo[148362]: pam_unix(sudo:session): session closed for user root
Dec 07 09:54:18 compute-1 sudo[148365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 09:54:18 compute-1 sudo[148365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:54:18 compute-1 sudo[148365]: pam_unix(sudo:session): session closed for user root
Dec 07 09:54:18 compute-1 sudo[148541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eswqpncicpdpliyulequttfahsjktdmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101258.2695885-690-238726713614059/AnsiballZ_command.py'
Dec 07 09:54:18 compute-1 sudo[148541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:54:18 compute-1 python3.9[148543]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:54:18 compute-1 sudo[148541]: pam_unix(sudo:session): session closed for user root
Dec 07 09:54:19 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:19 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000029s ======
Dec 07 09:54:19 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:19.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 07 09:54:19 compute-1 sudo[148694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yigiocpyniyrwjhncbmqrjappdrwevmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101259.1040251-690-168767791838558/AnsiballZ_command.py'
Dec 07 09:54:19 compute-1 sudo[148694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:54:19 compute-1 python3.9[148696]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:54:19 compute-1 sudo[148694]: pam_unix(sudo:session): session closed for user root
Dec 07 09:54:19 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:19 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:54:19 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:54:19.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:54:19 compute-1 ceph-mon[80077]: pgmap v355: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Dec 07 09:54:20 compute-1 sshd-session[148698]: Connection closed by authenticating user root 104.248.193.130 port 60208 [preauth]
Dec 07 09:54:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:20 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 07 09:54:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:20 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec 07 09:54:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:20 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec 07 09:54:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:20 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec 07 09:54:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:20 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec 07 09:54:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:20 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec 07 09:54:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:20 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec 07 09:54:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:20 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 09:54:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:20 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 09:54:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:20 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 09:54:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:20 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec 07 09:54:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:20 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 09:54:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:20 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec 07 09:54:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:20 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec 07 09:54:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:20 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec 07 09:54:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:20 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec 07 09:54:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:20 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec 07 09:54:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:20 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec 07 09:54:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:20 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec 07 09:54:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:20 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec 07 09:54:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:20 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec 07 09:54:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:20 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec 07 09:54:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:20 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec 07 09:54:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:20 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec 07 09:54:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:20 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 07 09:54:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:20 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec 07 09:54:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:20 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 07 09:54:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:20 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 07 09:54:21 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:21 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:54:21 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:21.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:54:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:21 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfc0000df0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:54:21 compute-1 sudo[148866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lndfacfkbdnviwlwuhrdlyiaqlzlrpgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101260.95539-852-63047305860749/AnsiballZ_getent.py'
Dec 07 09:54:21 compute-1 sudo[148866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:54:21 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:54:21 compute-1 python3.9[148868]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Dec 07 09:54:21 compute-1 sudo[148866]: pam_unix(sudo:session): session closed for user root
Dec 07 09:54:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:21 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfb8001ac0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:54:21 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:21 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:54:21 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:54:21.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:54:21 compute-1 ceph-mon[80077]: pgmap v356: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Dec 07 09:54:22 compute-1 sudo[149020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjtcdvvzacsldlsbdsvuruyawuinykho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101261.8636684-876-177278777661484/AnsiballZ_group.py'
Dec 07 09:54:22 compute-1 sudo[149020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:54:22 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:22 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfb8001ac0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:54:22 compute-1 python3.9[149022]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 07 09:54:22 compute-1 groupadd[149023]: group added to /etc/group: name=libvirt, GID=42473
Dec 07 09:54:22 compute-1 groupadd[149023]: group added to /etc/gshadow: name=libvirt
Dec 07 09:54:22 compute-1 groupadd[149023]: new group: name=libvirt, GID=42473
Dec 07 09:54:22 compute-1 sudo[149020]: pam_unix(sudo:session): session closed for user root
Dec 07 09:54:23 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:23 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:54:23 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:23.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:54:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/095423 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 07 09:54:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:23 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf94000b60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:54:23 compute-1 sudo[149178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzgbacwztgzdfncheeifdaqhqvkydxbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101262.957011-900-219347135814189/AnsiballZ_user.py'
Dec 07 09:54:23 compute-1 sudo[149178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:54:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:23 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa0000fa0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:54:23 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:23 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:54:23 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:54:23.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:54:23 compute-1 python3.9[149180]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 07 09:54:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:23 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 07 09:54:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:23 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 07 09:54:23 compute-1 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 07 09:54:23 compute-1 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 07 09:54:23 compute-1 useradd[149182]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Dec 07 09:54:23 compute-1 sudo[149178]: pam_unix(sudo:session): session closed for user root
Dec 07 09:54:23 compute-1 ceph-mon[80077]: pgmap v357: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 597 B/s wr, 2 op/s
Dec 07 09:54:24 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:24 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c000d00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:54:24 compute-1 sudo[149340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhfcvqjszmxcxxkwlayzrkyhnrmwpqch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101264.4325612-933-244201676737057/AnsiballZ_setup.py'
Dec 07 09:54:24 compute-1 sudo[149340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:54:24 compute-1 python3.9[149342]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 07 09:54:25 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:25 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:54:25 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:25.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:54:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:25 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfb80029b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:54:25 compute-1 sudo[149340]: pam_unix(sudo:session): session closed for user root
Dec 07 09:54:25 compute-1 sudo[149424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuvmgockutigfqfpracifyrdbkpymxix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101264.4325612-933-244201676737057/AnsiballZ_dnf.py'
Dec 07 09:54:25 compute-1 sudo[149424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:54:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:25 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf940016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:54:25 compute-1 python3.9[149426]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 07 09:54:25 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:25 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:54:25 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:54:25.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:54:26 compute-1 ceph-mon[80077]: pgmap v358: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Dec 07 09:54:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:26 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa0001ac0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:54:26 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:54:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:26 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 07 09:54:27 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:27 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:54:27 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:27.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:54:27 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:27 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa0001ac0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:54:27 compute-1 ceph-mon[80077]: pgmap v359: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Dec 07 09:54:27 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:54:27 compute-1 podman[149432]: 2025-12-07 09:54:27.620681664 +0000 UTC m=+0.124490169 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 07 09:54:27 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:27 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfb80029b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:54:27 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:27 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:54:27 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:54:27.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:54:28 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:28 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfb80029b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:54:29 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:29 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:54:29 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:29.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:54:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:29 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfb80029b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:54:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/095429 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 07 09:54:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:29 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa0001ac0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:54:29 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:29 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:54:29 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:54:29.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:54:30 compute-1 ceph-mon[80077]: pgmap v360: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Dec 07 09:54:30 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:30 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c001820 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:54:31 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:31 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:54:31 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:31.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:54:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:31 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf94001fc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:54:31 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:54:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:31 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfb80029b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:54:31 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:31 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:54:31 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:54:31.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:54:32 compute-1 ceph-mon[80077]: pgmap v361: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Dec 07 09:54:32 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:32 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa0001ac0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:54:33 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:33 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:54:33 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:33.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:54:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:33 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c002140 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:54:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:33 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf94001fc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:54:33 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:33 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:54:33 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:54:33.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:54:34 compute-1 ceph-mon[80077]: pgmap v362: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Dec 07 09:54:34 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:34 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfb80029b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:54:35 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:35 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:54:35 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:35.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:54:35 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:35 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa0003340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:54:35 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:35 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c002140 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:54:35 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:35 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:54:35 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:54:35.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:54:36 compute-1 ceph-mon[80077]: pgmap v363: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Dec 07 09:54:36 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:36 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf94002160 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:54:36 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:54:37 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:37 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:54:37 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:37.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:54:37 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:37 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfb80029b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:54:37 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:37 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa0003340 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:54:37 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:37 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:54:37 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:54:37.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:54:38 compute-1 ceph-mon[80077]: pgmap v364: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Dec 07 09:54:38 compute-1 sudo[149644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 09:54:38 compute-1 sudo[149644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:54:38 compute-1 sudo[149644]: pam_unix(sudo:session): session closed for user root
Dec 07 09:54:38 compute-1 podman[149668]: 2025-12-07 09:54:38.33525865 +0000 UTC m=+0.062839747 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 07 09:54:38 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:38 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c002140 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:54:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:54:38.624 142603 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 09:54:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:54:38.624 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 09:54:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:54:38.625 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 09:54:39 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:39 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:54:39 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:39.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:54:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:39 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf940032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:54:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:39 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfb80029b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:54:39 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:39 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:54:39 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:54:39.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:54:40 compute-1 ceph-mon[80077]: pgmap v365: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Dec 07 09:54:40 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:40 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa0004050 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:54:41 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:41 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:54:41 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:41.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:54:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:41 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c003490 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:54:41 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:54:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:41 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf940032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:54:41 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:41 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:54:41 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:54:41.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:54:42 compute-1 ceph-mon[80077]: pgmap v366: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec 07 09:54:42 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:42 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfb80029b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:54:43 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:43 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:54:43 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:43.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:54:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:43 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa0004050 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:54:43 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:54:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:43 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c003490 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:54:43 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:43 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:54:43 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:54:43.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:54:44 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:44 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf940032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:54:44 compute-1 ceph-mon[80077]: pgmap v367: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:54:45 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:45 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:54:45 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:45.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:54:45 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:45 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfb80029b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:54:45 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:45 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa0004050 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:54:45 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:45 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:54:45 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:54:45.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:54:46 compute-1 ceph-mon[80077]: pgmap v368: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:54:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:46 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c003490 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:54:46 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:54:47 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:47 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:54:47 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:47.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:54:47 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:47 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf940032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:54:47 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:47 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfb80029b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:54:47 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:47 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:54:47 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:54:47.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:54:48 compute-1 ceph-mon[80077]: pgmap v369: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:54:48 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:48 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa0004050 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:54:49 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:49 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:54:49 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:49.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:54:49 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:49 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c003490 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:54:49 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:49 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf940032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:54:49 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:49 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:54:49 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:54:49.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:54:50 compute-1 ceph-mon[80077]: pgmap v370: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:54:50 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:50 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfb80029b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:54:51 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:51 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:54:51 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:51.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:54:51 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:51 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa0004050 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:54:51 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:51 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c003490 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:54:51 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:54:51 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:51 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:54:51 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:54:51.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:54:51 compute-1 ceph-mon[80077]: pgmap v371: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 07 09:54:52 compute-1 kernel: SELinux:  Converting 2772 SID table entries...
Dec 07 09:54:52 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Dec 07 09:54:52 compute-1 kernel: SELinux:  policy capability open_perms=1
Dec 07 09:54:52 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Dec 07 09:54:52 compute-1 kernel: SELinux:  policy capability always_check_network=0
Dec 07 09:54:52 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 07 09:54:52 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 07 09:54:52 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 07 09:54:52 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:52 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf940032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:54:53 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:53 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:54:53 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:53.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:54:53 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:53 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfb80029b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:54:53 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:53 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfc0002010 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:54:53 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:53 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:54:53 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:54:53.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:54:54 compute-1 ceph-mon[80077]: pgmap v372: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:54:54 compute-1 sudo[149714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 09:54:54 compute-1 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Dec 07 09:54:54 compute-1 sudo[149714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:54:54 compute-1 sudo[149714]: pam_unix(sudo:session): session closed for user root
Dec 07 09:54:54 compute-1 sudo[149739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 07 09:54:54 compute-1 sudo[149739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:54:54 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:54 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf90000b60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:54:55 compute-1 podman[149838]: 2025-12-07 09:54:55.028060143 +0000 UTC m=+0.229743794 container exec 0adb3b962f9be9f77484cea48a61ddb6dbdac27d2f6f2d11917c2759647cf1b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-crash-compute-1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 07 09:54:55 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:55 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:54:55 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:55.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:54:55 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 07 09:54:55 compute-1 podman[149838]: 2025-12-07 09:54:55.148992315 +0000 UTC m=+0.350675926 container exec_died 0adb3b962f9be9f77484cea48a61ddb6dbdac27d2f6f2d11917c2759647cf1b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-crash-compute-1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec 07 09:54:55 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:55 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf940032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:54:55 compute-1 podman[149958]: 2025-12-07 09:54:55.694610509 +0000 UTC m=+0.078751129 container exec 0a4d2954764c22b86121bac3d11bc9b63e47782ce61bae7b4370d135249c4a00 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 07 09:54:55 compute-1 podman[149958]: 2025-12-07 09:54:55.706004918 +0000 UTC m=+0.090145518 container exec_died 0a4d2954764c22b86121bac3d11bc9b63e47782ce61bae7b4370d135249c4a00 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 07 09:54:55 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:55 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfb80029b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:54:55 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:55 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:54:55 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:54:55.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:54:56 compute-1 podman[150049]: 2025-12-07 09:54:56.124346289 +0000 UTC m=+0.078222724 container exec 03a124eb13e28113f1f948c170094d32709e076e5a926740ef26dfb5a5924436 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec 07 09:54:56 compute-1 podman[150049]: 2025-12-07 09:54:56.1450562 +0000 UTC m=+0.098932665 container exec_died 03a124eb13e28113f1f948c170094d32709e076e5a926740ef26dfb5a5924436 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 07 09:54:56 compute-1 ceph-mon[80077]: pgmap v373: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:54:56 compute-1 podman[150114]: 2025-12-07 09:54:56.458216837 +0000 UTC m=+0.072560650 container exec beb6c210855d5dee9a79b7f50cd4854bb9f0b0f00930d5f252196d17b26aec7f (image=quay.io/ceph/haproxy:2.3, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua)
Dec 07 09:54:56 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:56 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfc0002010 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:54:56 compute-1 podman[150114]: 2025-12-07 09:54:56.469515794 +0000 UTC m=+0.083859607 container exec_died beb6c210855d5dee9a79b7f50cd4854bb9f0b0f00930d5f252196d17b26aec7f (image=quay.io/ceph/haproxy:2.3, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua)
Dec 07 09:54:56 compute-1 podman[150180]: 2025-12-07 09:54:56.708506079 +0000 UTC m=+0.056240338 container exec 63d2276d352335c300047c0a6c5b69b30b1c28b930e12681d437c0d1d4a9599f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-nfs-cephfs-compute-1-gawwbe, vendor=Red Hat, Inc., version=2.2.4, distribution-scope=public, io.buildah.version=1.28.2, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, com.redhat.component=keepalived-container, description=keepalived for Ceph, release=1793)
Dec 07 09:54:56 compute-1 podman[150180]: 2025-12-07 09:54:56.723082284 +0000 UTC m=+0.070816533 container exec_died 63d2276d352335c300047c0a6c5b69b30b1c28b930e12681d437c0d1d4a9599f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-nfs-cephfs-compute-1-gawwbe, description=keepalived for Ceph, release=1793, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, io.buildah.version=1.28.2, com.redhat.component=keepalived-container)
Dec 07 09:54:56 compute-1 sudo[149739]: pam_unix(sudo:session): session closed for user root
Dec 07 09:54:56 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:54:56 compute-1 sudo[150210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 09:54:56 compute-1 sudo[150210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:54:56 compute-1 sudo[150210]: pam_unix(sudo:session): session closed for user root
Dec 07 09:54:56 compute-1 sudo[150235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 07 09:54:56 compute-1 sudo[150235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:54:57 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:57 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:54:57 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:57.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:54:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:57 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf900016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:54:57 compute-1 sudo[150235]: pam_unix(sudo:session): session closed for user root
Dec 07 09:54:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:57 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf940032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:54:57 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:57 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:54:57 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:54:57.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:54:57 compute-1 ceph-mon[80077]: pgmap v374: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:54:57 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:54:57 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:54:57 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:54:57 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:54:57 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:54:57 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 07 09:54:58 compute-1 sudo[150294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 09:54:58 compute-1 sudo[150294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:54:58 compute-1 sudo[150294]: pam_unix(sudo:session): session closed for user root
Dec 07 09:54:58 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:58 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfb80029b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:54:58 compute-1 podman[150318]: 2025-12-07 09:54:58.537452603 +0000 UTC m=+0.152380475 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 07 09:54:58 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 07 09:54:58 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:54:58 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 07 09:54:58 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:54:58 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:54:58 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 07 09:54:58 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 07 09:54:58 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:54:59 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:59 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:54:59 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:54:59.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:54:59 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:59 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfc0002010 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:54:59 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:54:59 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf900016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:54:59 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:54:59 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:54:59 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:54:59.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:55:00 compute-1 ceph-mon[80077]: pgmap v375: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:55:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:00 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf940032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:01 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:01 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:55:01 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:01.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:55:01 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:01 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfb80029b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:01 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:01 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfc0002010 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:01 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:55:01 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:01 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:55:01 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:01.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:55:02 compute-1 ceph-mon[80077]: pgmap v376: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 07 09:55:02 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:02 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf900016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:02 compute-1 kernel: SELinux:  Converting 2772 SID table entries...
Dec 07 09:55:02 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Dec 07 09:55:02 compute-1 kernel: SELinux:  policy capability open_perms=1
Dec 07 09:55:02 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Dec 07 09:55:02 compute-1 kernel: SELinux:  policy capability always_check_network=0
Dec 07 09:55:02 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 07 09:55:02 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 07 09:55:02 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 07 09:55:03 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:03 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:55:03 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:03.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:55:03 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:03 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf940032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:03 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:03 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfb80029b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:03 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:03 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:55:03 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:03.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:55:04 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:04 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfc00091b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:05 compute-1 ceph-mon[80077]: pgmap v377: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:55:05 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:05 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:55:05 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:05.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:55:05 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:05 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf90002b10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:05 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:05 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf940032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:05 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:05 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:55:05 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:05.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:55:05 compute-1 sshd-session[150357]: Connection closed by authenticating user root 104.248.193.130 port 50684 [preauth]
Dec 07 09:55:06 compute-1 sudo[150360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 07 09:55:06 compute-1 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Dec 07 09:55:06 compute-1 sudo[150360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:55:06 compute-1 sudo[150360]: pam_unix(sudo:session): session closed for user root
Dec 07 09:55:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:06 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfb80029b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:07 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:07 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:55:07 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:07.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:55:07 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:55:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:07 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfc00091b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:07 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf90002b10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:07 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:07 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:55:07 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:07.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:55:07 compute-1 ceph-mon[80077]: pgmap v378: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:55:07 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:55:07 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:55:08 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:08 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf940032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:08 compute-1 podman[150386]: 2025-12-07 09:55:08.591431776 +0000 UTC m=+0.079241921 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 07 09:55:09 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:09 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:55:09 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:09.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:55:09 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:09 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf940032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:09 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:09 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfc0009ec0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:09 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:09 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:55:09 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:09.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:55:09 compute-1 ceph-mon[80077]: pgmap v379: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:55:10 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:10 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf940032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:11 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:11 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:55:11 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:11.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:55:11 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:11 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfb80029b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:11 compute-1 ceph-mon[80077]: pgmap v380: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:55:11 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:11 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf90003820 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:11 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:11 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:55:11 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:11.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:55:12 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:55:12 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:12 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfc0009ec0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:12 compute-1 ceph-mon[80077]: pgmap v381: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 07 09:55:13 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:13 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:55:13 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:13.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:55:13 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:13 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf940032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:13 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:13 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfb80029b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:13 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:13 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:55:13 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:13.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:55:13 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:55:13 compute-1 ceph-mon[80077]: pgmap v382: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:55:14 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:14 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf90003820 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:15 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:15 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:55:15 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:15.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:55:15 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:15 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfc0009ec0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:15 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:15 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf940032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:15 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:15 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:55:15 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:15.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:55:16 compute-1 ceph-mon[80077]: pgmap v383: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:55:16 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:16 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfb80029b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:17 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:17 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:55:17 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:17.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:55:17 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:55:17 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:17 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf90003820 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:17 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:17 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfc0009ec0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:17 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:17 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:55:17 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:17.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:55:18 compute-1 ceph-mon[80077]: pgmap v384: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:55:18 compute-1 sudo[152093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 09:55:18 compute-1 sudo[152093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:55:18 compute-1 sudo[152093]: pam_unix(sudo:session): session closed for user root
Dec 07 09:55:18 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:18 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf940032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:19 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:19 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:55:19 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:19.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:55:19 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:19 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfb80029b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:19 compute-1 ceph-mon[80077]: pgmap v385: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:55:19 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:19 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfb80029b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:19 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:19 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:55:19 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:19.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:55:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:20 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfc0009ec0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:21 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:21 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:55:21 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:21.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:55:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:21 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf940032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:21 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf90003820 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:21 compute-1 ceph-mon[80077]: pgmap v386: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 07 09:55:21 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:21 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:55:21 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:21.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:55:22 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:55:22 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:22 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfb80029b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:23 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:23 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:55:23 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:23.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:55:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:23 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfb80029b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:23 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf940032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:23 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:23 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:55:23 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:23.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:55:24 compute-1 ceph-mon[80077]: pgmap v387: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:55:24 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:24 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c001230 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:25 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:25 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:55:25 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:25.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:55:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:25 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa00014d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:25 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfc000a7e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:25 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:25 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:55:25 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:25.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:55:26 compute-1 ceph-mon[80077]: pgmap v388: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:55:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:26 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf940032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:27 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:27 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:55:27 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:27.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:55:27 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:55:27 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:27 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf940032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:27 compute-1 ceph-mon[80077]: pgmap v389: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:55:27 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:55:27 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:27 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa00014d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:27 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:27 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:55:27 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:27.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:55:28 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:28 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa00014d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:29 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:29 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:55:29 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:29.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:55:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:29 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c001230 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:29 compute-1 podman[159083]: 2025-12-07 09:55:29.650211281 +0000 UTC m=+0.136723070 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 07 09:55:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:29 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf940032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:29 compute-1 ceph-mon[80077]: pgmap v390: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:55:29 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:29 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:55:29 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:29.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:55:30 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:30 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa00014d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:31 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:31 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:55:31 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:31.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:55:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:31 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa00014d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:31 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c002870 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:31 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:31 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:55:31 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:31.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:55:31 compute-1 ceph-mon[80077]: pgmap v391: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 07 09:55:32 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:55:32 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:32 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf940032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:33 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:33 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:55:33 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:33.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:55:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:33 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf940032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:33 compute-1 ceph-mon[80077]: pgmap v392: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:55:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:33 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa00014d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:33 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:33 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:55:33 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:33.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:55:34 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:34 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c002870 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:35 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:35 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:55:35 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:35.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:55:35 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:35 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfc000a7e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:35 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:35 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf940032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:35 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:35 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:55:35 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:35.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:55:35 compute-1 ceph-mon[80077]: pgmap v393: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:55:36 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:36 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa00014d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:37 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:37 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:55:37 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:37.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:55:37 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:37 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c002870 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:37 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:55:37 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:37 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfc000a7e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:37 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:37 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:55:37 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:37.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:55:37 compute-1 ceph-mon[80077]: pgmap v394: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:55:38 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:38 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf940032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:38 compute-1 sudo[164398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 09:55:38 compute-1 sudo[164398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:55:38 compute-1 sudo[164398]: pam_unix(sudo:session): session closed for user root
Dec 07 09:55:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:55:38.625 142603 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 09:55:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:55:38.626 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 09:55:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:55:38.626 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 09:55:39 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:39 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:55:39 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:39.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:55:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:39 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa00014d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:39 compute-1 podman[165019]: 2025-12-07 09:55:39.566495424 +0000 UTC m=+0.071801567 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 07 09:55:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:39 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c002870 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:39 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:39 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:55:39 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:39.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:55:40 compute-1 ceph-mon[80077]: pgmap v395: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:55:40 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:40 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfc000a7e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:41 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:41 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:55:41 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:41.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:55:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:41 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf940032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:41 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa00014d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:41 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:41 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:55:41 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:41.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:55:42 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:55:42 compute-1 ceph-mon[80077]: pgmap v396: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 07 09:55:42 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:42 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf9c002870 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:43 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:43 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:55:43 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:43.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:55:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:43 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfc000a7e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:43 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:55:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:43 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf940032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:43 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:43 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:55:43 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:43.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:55:44 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:44 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa00014d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:44 compute-1 ceph-mon[80077]: pgmap v397: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:55:45 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:45 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:55:45 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:45.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:55:45 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:45 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa00014d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:45 compute-1 ceph-mon[80077]: pgmap v398: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:55:45 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:45 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfc000a7e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:45 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:45 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:55:45 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:45.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:55:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:46 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf94004390 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:47 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:47 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:55:47 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:47.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:55:47 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:47 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa00014d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:47 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:55:47 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:47 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa00014d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:47 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:47 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:55:47 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:47.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:55:48 compute-1 ceph-mon[80077]: pgmap v399: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:55:48 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:48 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfc000a7e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:49 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:49 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:55:49 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:49.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:55:49 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:49 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdf94004390 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:49 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:49 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa00014d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:49 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:49 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:55:49 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:49.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:55:50 compute-1 ceph-mon[80077]: pgmap v400: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:55:50 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:50 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfa00014d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:55:51 compute-1 sshd-session[167344]: Connection closed by authenticating user root 104.248.193.130 port 39262 [preauth]
Dec 07 09:55:51 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:51 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:55:51 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:51.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:55:51 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[145828]: 07/12/2025 09:55:51 : epoch 69354ebc : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdfc000a7e0 fd 42 proxy ignored for local
Dec 07 09:55:51 compute-1 kernel: ganesha.nfsd[149710]: segfault at 50 ip 00007fe06a19e32e sp 00007fe0237fd210 error 4 in libntirpc.so.5.8[7fe06a183000+2c000] likely on CPU 7 (core 0, socket 7)
Dec 07 09:55:51 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec 07 09:55:51 compute-1 systemd[1]: Started Process Core Dump (PID 167346/UID 0).
Dec 07 09:55:51 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:51 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:55:51 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:51.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:55:53 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:53 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:55:53 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:53.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:55:53 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:53 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:55:53 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:53.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:55:55 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:55:55 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:55 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:55:55 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:55.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:55:55 compute-1 systemd-coredump[167347]: Process 145832 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 56:
                                                    #0  0x00007fe06a19e32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Dec 07 09:55:55 compute-1 systemd[1]: systemd-coredump@5-167346-0.service: Deactivated successfully.
Dec 07 09:55:55 compute-1 systemd[1]: systemd-coredump@5-167346-0.service: Consumed 1.151s CPU time.
Dec 07 09:55:55 compute-1 ceph-mon[80077]: pgmap v401: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 07 09:55:55 compute-1 podman[167354]: 2025-12-07 09:55:55.591973672 +0000 UTC m=+0.038743409 container died 03a124eb13e28113f1f948c170094d32709e076e5a926740ef26dfb5a5924436 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 07 09:55:55 compute-1 systemd[1]: var-lib-containers-storage-overlay-1dcf2e693f084487d745dba7fe6426dce8de17ed6533961ce722c5d8d2b9204a-merged.mount: Deactivated successfully.
Dec 07 09:55:55 compute-1 podman[167354]: 2025-12-07 09:55:55.705394402 +0000 UTC m=+0.152164119 container remove 03a124eb13e28113f1f948c170094d32709e076e5a926740ef26dfb5a5924436 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 07 09:55:55 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Main process exited, code=exited, status=139/n/a
Dec 07 09:55:55 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Failed with result 'exit-code'.
Dec 07 09:55:55 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Consumed 1.471s CPU time.
Dec 07 09:55:55 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:55 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:55:55 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:55.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:55:56 compute-1 ceph-mon[80077]: pgmap v402: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:55:56 compute-1 kernel: SELinux:  Converting 2773 SID table entries...
Dec 07 09:55:56 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Dec 07 09:55:56 compute-1 kernel: SELinux:  policy capability open_perms=1
Dec 07 09:55:56 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Dec 07 09:55:56 compute-1 kernel: SELinux:  policy capability always_check_network=0
Dec 07 09:55:56 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 07 09:55:56 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 07 09:55:56 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 07 09:55:57 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:57 compute-1 ceph-mon[80077]: pgmap v403: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:55:57 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:55:57 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:57.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:55:57 compute-1 groupadd[167408]: group added to /etc/group: name=dnsmasq, GID=992
Dec 07 09:55:57 compute-1 groupadd[167408]: group added to /etc/gshadow: name=dnsmasq
Dec 07 09:55:57 compute-1 groupadd[167408]: new group: name=dnsmasq, GID=992
Dec 07 09:55:57 compute-1 useradd[167415]: new user: name=dnsmasq, UID=992, GID=992, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Dec 07 09:55:57 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:57 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:55:57 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:57.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:55:57 compute-1 dbus-broker-launch[740]: Noticed file-system modification, trigger reload.
Dec 07 09:55:57 compute-1 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Dec 07 09:55:57 compute-1 dbus-broker-launch[740]: Noticed file-system modification, trigger reload.
Dec 07 09:55:58 compute-1 ceph-mon[80077]: pgmap v404: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:55:58 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:55:58 compute-1 sudo[167426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 09:55:58 compute-1 sudo[167426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:55:58 compute-1 sudo[167426]: pam_unix(sudo:session): session closed for user root
Dec 07 09:55:58 compute-1 groupadd[167454]: group added to /etc/group: name=clevis, GID=991
Dec 07 09:55:58 compute-1 groupadd[167454]: group added to /etc/gshadow: name=clevis
Dec 07 09:55:58 compute-1 groupadd[167454]: new group: name=clevis, GID=991
Dec 07 09:55:58 compute-1 useradd[167461]: new user: name=clevis, UID=991, GID=991, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Dec 07 09:55:58 compute-1 usermod[167471]: add 'clevis' to group 'tss'
Dec 07 09:55:58 compute-1 usermod[167471]: add 'clevis' to shadow group 'tss'
Dec 07 09:55:59 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:59 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:55:59 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:55:59.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:55:59 compute-1 podman[167489]: 2025-12-07 09:55:59.896686523 +0000 UTC m=+0.166324146 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 07 09:55:59 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:55:59 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:55:59 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:55:59.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:56:00 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:56:00 compute-1 ceph-mon[80077]: pgmap v405: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:56:01 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:56:01 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:56:01 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:01.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:56:01 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/095601 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 07 09:56:01 compute-1 polkitd[43436]: Reloading rules
Dec 07 09:56:01 compute-1 polkitd[43436]: Collecting garbage unconditionally...
Dec 07 09:56:01 compute-1 polkitd[43436]: Loading rules from directory /etc/polkit-1/rules.d
Dec 07 09:56:01 compute-1 polkitd[43436]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 07 09:56:01 compute-1 polkitd[43436]: Finished loading, compiling and executing 3 rules
Dec 07 09:56:01 compute-1 polkitd[43436]: Reloading rules
Dec 07 09:56:01 compute-1 polkitd[43436]: Collecting garbage unconditionally...
Dec 07 09:56:01 compute-1 polkitd[43436]: Loading rules from directory /etc/polkit-1/rules.d
Dec 07 09:56:01 compute-1 polkitd[43436]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 07 09:56:01 compute-1 polkitd[43436]: Finished loading, compiling and executing 3 rules
Dec 07 09:56:01 compute-1 anacron[4155]: Job `cron.monthly' started
Dec 07 09:56:01 compute-1 anacron[4155]: Job `cron.monthly' terminated
Dec 07 09:56:01 compute-1 anacron[4155]: Normal exit (3 jobs run)
Dec 07 09:56:01 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:56:01 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:56:01 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:56:01.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:56:02 compute-1 ceph-mon[80077]: pgmap v406: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 07 09:56:03 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:56:03 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:56:03 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:03.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:56:03 compute-1 groupadd[167687]: group added to /etc/group: name=ceph, GID=167
Dec 07 09:56:03 compute-1 groupadd[167687]: group added to /etc/gshadow: name=ceph
Dec 07 09:56:03 compute-1 groupadd[167687]: new group: name=ceph, GID=167
Dec 07 09:56:03 compute-1 useradd[167693]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Dec 07 09:56:03 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:56:03 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:56:03 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:56:03.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:56:04 compute-1 ceph-mon[80077]: pgmap v407: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 07 09:56:05 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:56:05 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:56:05 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:56:05 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:05.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:56:05 compute-1 ceph-mon[80077]: pgmap v408: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 07 09:56:05 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:56:05 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:56:05 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:56:05.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:56:05 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Scheduled restart job, restart counter is at 6.
Dec 07 09:56:05 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.jddrlu for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec 07 09:56:05 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Consumed 1.471s CPU time.
Dec 07 09:56:05 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.jddrlu for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c...
Dec 07 09:56:06 compute-1 systemd[1]: Stopping OpenSSH server daemon...
Dec 07 09:56:06 compute-1 sshd[1007]: Received signal 15; terminating.
Dec 07 09:56:06 compute-1 systemd[1]: sshd.service: Deactivated successfully.
Dec 07 09:56:06 compute-1 systemd[1]: sshd.service: Unit process 167701 (sshd-session) remains running after unit stopped.
Dec 07 09:56:06 compute-1 systemd[1]: sshd.service: Unit process 167702 (sshd-session) remains running after unit stopped.
Dec 07 09:56:06 compute-1 systemd[1]: Stopped OpenSSH server daemon.
Dec 07 09:56:06 compute-1 systemd[1]: sshd.service: Consumed 3.194s CPU time, 34.7M memory peak, read 32.0K from disk, written 0B to disk.
Dec 07 09:56:06 compute-1 systemd[1]: Stopped target sshd-keygen.target.
Dec 07 09:56:06 compute-1 systemd[1]: Stopping sshd-keygen.target...
Dec 07 09:56:06 compute-1 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 07 09:56:06 compute-1 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 07 09:56:06 compute-1 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 07 09:56:06 compute-1 systemd[1]: Reached target sshd-keygen.target.
Dec 07 09:56:06 compute-1 systemd[1]: Starting OpenSSH server daemon...
Dec 07 09:56:06 compute-1 sshd[168345]: Server listening on 0.0.0.0 port 22.
Dec 07 09:56:06 compute-1 sshd[168345]: Server listening on :: port 22.
Dec 07 09:56:06 compute-1 systemd[1]: Started OpenSSH server daemon.
Dec 07 09:56:06 compute-1 podman[168389]: 2025-12-07 09:56:06.219239457 +0000 UTC m=+0.048127449 container create 1b7ee8f05cd702b7f96b6b31042f9cd301f9dd5351308df376041203a43e77c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 07 09:56:06 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19290d3d7d306b21dea28fa5fbb286c4244a288a1453bf9c182d1038bf9b9fb5/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec 07 09:56:06 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19290d3d7d306b21dea28fa5fbb286c4244a288a1453bf9c182d1038bf9b9fb5/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec 07 09:56:06 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19290d3d7d306b21dea28fa5fbb286c4244a288a1453bf9c182d1038bf9b9fb5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 07 09:56:06 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19290d3d7d306b21dea28fa5fbb286c4244a288a1453bf9c182d1038bf9b9fb5/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.jddrlu-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec 07 09:56:06 compute-1 podman[168389]: 2025-12-07 09:56:06.276520857 +0000 UTC m=+0.105408879 container init 1b7ee8f05cd702b7f96b6b31042f9cd301f9dd5351308df376041203a43e77c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec 07 09:56:06 compute-1 podman[168389]: 2025-12-07 09:56:06.282217578 +0000 UTC m=+0.111105570 container start 1b7ee8f05cd702b7f96b6b31042f9cd301f9dd5351308df376041203a43e77c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec 07 09:56:06 compute-1 bash[168389]: 1b7ee8f05cd702b7f96b6b31042f9cd301f9dd5351308df376041203a43e77c1
Dec 07 09:56:06 compute-1 podman[168389]: 2025-12-07 09:56:06.196486812 +0000 UTC m=+0.025374834 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 07 09:56:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:06 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec 07 09:56:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:06 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec 07 09:56:06 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.jddrlu for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec 07 09:56:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:06 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec 07 09:56:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:06 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec 07 09:56:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:06 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec 07 09:56:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:06 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec 07 09:56:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:06 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec 07 09:56:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:06 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 07 09:56:06 compute-1 sudo[168495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 09:56:06 compute-1 sudo[168495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:56:06 compute-1 sudo[168495]: pam_unix(sudo:session): session closed for user root
Dec 07 09:56:06 compute-1 sudo[168526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 07 09:56:06 compute-1 sudo[168526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:56:07 compute-1 sudo[168526]: pam_unix(sudo:session): session closed for user root
Dec 07 09:56:07 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:56:07 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:56:07 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:07.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:56:07 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:56:07 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:56:07 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:56:07.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:56:08 compute-1 ceph-mon[80077]: pgmap v409: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 07 09:56:08 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:56:08 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 07 09:56:09 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 07 09:56:09 compute-1 systemd[1]: Starting man-db-cache-update.service...
Dec 07 09:56:09 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:56:09 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:56:09 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:09.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:56:09 compute-1 systemd[1]: Reloading.
Dec 07 09:56:09 compute-1 systemd-rc-local-generator[168773]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:56:09 compute-1 systemd-sysv-generator[168776]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:56:09 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/095609 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 07 09:56:09 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 07 09:56:09 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:56:09 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:56:09 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:56:09.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:56:10 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:56:10 compute-1 ceph-mon[80077]: pgmap v410: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 07 09:56:10 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:56:10 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:56:10 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 07 09:56:10 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 07 09:56:10 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:56:10 compute-1 podman[169975]: 2025-12-07 09:56:10.555918054 +0000 UTC m=+0.061740999 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 07 09:56:11 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:56:11 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:56:11 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:11.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:56:11 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:56:11 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:56:11 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:56:11.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:56:11 compute-1 sudo[149424]: pam_unix(sudo:session): session closed for user root
Dec 07 09:56:12 compute-1 ceph-mon[80077]: pgmap v411: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Dec 07 09:56:12 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:12 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 07 09:56:12 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:12 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 07 09:56:12 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:12 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 07 09:56:13 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:56:13 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:56:13 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:13.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:56:13 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:56:13 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:56:13 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:56:13 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:56:13.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:56:14 compute-1 sshd-session[167701]: Connection closed by 103.29.69.96 port 38302 [preauth]
Dec 07 09:56:14 compute-1 ceph-mon[80077]: pgmap v412: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 426 B/s wr, 1 op/s
Dec 07 09:56:15 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:56:15 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:56:15 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:56:15 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:15.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:56:15 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:15 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 07 09:56:15 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:15 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 07 09:56:15 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:15 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 07 09:56:15 compute-1 sudo[175933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 07 09:56:15 compute-1 sudo[175933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:56:15 compute-1 sudo[175933]: pam_unix(sudo:session): session closed for user root
Dec 07 09:56:15 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:56:15 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:56:15 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:56:15.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:56:16 compute-1 ceph-mon[80077]: pgmap v413: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Dec 07 09:56:16 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:56:16 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:56:17 compute-1 sudo[177348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-foysvxxvbckedxpqjjtxslfjiacfyxbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101376.5851269-969-8340439489351/AnsiballZ_systemd.py'
Dec 07 09:56:17 compute-1 sudo[177348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:56:17 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:56:17 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:56:17 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:17.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:56:17 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 07 09:56:17 compute-1 systemd[1]: Finished man-db-cache-update.service.
Dec 07 09:56:17 compute-1 systemd[1]: man-db-cache-update.service: Consumed 10.043s CPU time.
Dec 07 09:56:17 compute-1 systemd[1]: run-rb85f8ba5e9294b7383cce1420e41858a.service: Deactivated successfully.
Dec 07 09:56:17 compute-1 ceph-mon[80077]: pgmap v414: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Dec 07 09:56:17 compute-1 python3.9[177350]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 07 09:56:17 compute-1 systemd[1]: Reloading.
Dec 07 09:56:17 compute-1 systemd-rc-local-generator[177378]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:56:17 compute-1 systemd-sysv-generator[177381]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:56:17 compute-1 sudo[177348]: pam_unix(sudo:session): session closed for user root
Dec 07 09:56:17 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:56:17 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:56:17 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:56:17.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:56:18 compute-1 sudo[177540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehmpdnmsrmtucztzwgbtrztdwrigxmtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101377.9479072-969-197405925792504/AnsiballZ_systemd.py'
Dec 07 09:56:18 compute-1 sudo[177540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:56:18 compute-1 python3.9[177542]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 07 09:56:18 compute-1 systemd[1]: Reloading.
Dec 07 09:56:18 compute-1 systemd-rc-local-generator[177572]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:56:18 compute-1 systemd-sysv-generator[177576]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:56:18 compute-1 sudo[177580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 09:56:18 compute-1 sudo[177580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:56:18 compute-1 sudo[177580]: pam_unix(sudo:session): session closed for user root
Dec 07 09:56:18 compute-1 sudo[177540]: pam_unix(sudo:session): session closed for user root
Dec 07 09:56:19 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:56:19 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:56:19 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:19.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:56:19 compute-1 sudo[177755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rryeiahhskvoknmxghjobdzzyikyskri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101379.0181954-969-42605451190345/AnsiballZ_systemd.py'
Dec 07 09:56:19 compute-1 sudo[177755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:56:19 compute-1 python3.9[177757]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 07 09:56:19 compute-1 systemd[1]: Reloading.
Dec 07 09:56:19 compute-1 systemd-sysv-generator[177790]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:56:19 compute-1 systemd-rc-local-generator[177786]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:56:19 compute-1 ceph-mon[80077]: pgmap v415: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Dec 07 09:56:19 compute-1 sudo[177755]: pam_unix(sudo:session): session closed for user root
Dec 07 09:56:19 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:56:19 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:56:19 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:56:19.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:56:20 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:56:20 compute-1 sudo[177945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-othgglyauxbubmvjxmiyhanmtjnqeies ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101380.0915372-969-267606297141688/AnsiballZ_systemd.py'
Dec 07 09:56:20 compute-1 sudo[177945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:56:20 compute-1 python3.9[177947]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 07 09:56:20 compute-1 systemd[1]: Reloading.
Dec 07 09:56:20 compute-1 systemd-rc-local-generator[177974]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:56:20 compute-1 systemd-sysv-generator[177978]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:56:21 compute-1 sudo[177945]: pam_unix(sudo:session): session closed for user root
Dec 07 09:56:21 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:56:21 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:56:21 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:21.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:56:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:21 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 07 09:56:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:21 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec 07 09:56:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:21 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec 07 09:56:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:21 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec 07 09:56:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:21 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec 07 09:56:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:21 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec 07 09:56:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:21 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec 07 09:56:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:21 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 09:56:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:21 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 09:56:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:21 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 09:56:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:21 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec 07 09:56:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:21 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 09:56:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:21 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec 07 09:56:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:21 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec 07 09:56:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:21 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec 07 09:56:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:21 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec 07 09:56:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:21 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec 07 09:56:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:21 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec 07 09:56:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:21 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec 07 09:56:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:21 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec 07 09:56:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:21 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec 07 09:56:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:21 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec 07 09:56:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:21 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec 07 09:56:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:21 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec 07 09:56:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:21 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 07 09:56:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:21 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec 07 09:56:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:21 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 07 09:56:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:21 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 07 09:56:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:21 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4bc000df0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:56:21 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:56:21 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.002000052s ======
Dec 07 09:56:21 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:56:21.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Dec 07 09:56:21 compute-1 ceph-mon[80077]: pgmap v416: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Dec 07 09:56:21 compute-1 sudo[178151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjhrgknqhzvulhletoulwqynezohruae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101381.6137946-1056-87622176905801/AnsiballZ_systemd.py'
Dec 07 09:56:21 compute-1 sudo[178151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:56:22 compute-1 python3.9[178153]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 07 09:56:22 compute-1 systemd[1]: Reloading.
Dec 07 09:56:22 compute-1 systemd-rc-local-generator[178184]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:56:22 compute-1 systemd-sysv-generator[178188]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:56:22 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:22 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b0001c00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:56:22 compute-1 sudo[178151]: pam_unix(sudo:session): session closed for user root
Dec 07 09:56:23 compute-1 sudo[178342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlaefmbqgwwxivfktrfaqbshhqjqufxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101382.824216-1056-79185988830757/AnsiballZ_systemd.py'
Dec 07 09:56:23 compute-1 sudo[178342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:56:23 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:56:23 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:56:23 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:23.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:56:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:23 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb490000b60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:56:23 compute-1 python3.9[178344]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 07 09:56:23 compute-1 systemd[1]: Reloading.
Dec 07 09:56:23 compute-1 systemd-rc-local-generator[178374]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:56:23 compute-1 systemd-sysv-generator[178377]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:56:23 compute-1 sudo[178342]: pam_unix(sudo:session): session closed for user root
Dec 07 09:56:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:23 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb490000b60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:56:23 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:56:23 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:56:23 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:56:23.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:56:23 compute-1 ceph-mon[80077]: pgmap v417: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 682 B/s wr, 2 op/s
Dec 07 09:56:24 compute-1 sudo[178533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdotgybbvkjbwqborssgqqtjmcloojtb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101383.8512814-1056-76897030555636/AnsiballZ_systemd.py'
Dec 07 09:56:24 compute-1 sudo[178533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:56:24 compute-1 python3.9[178535]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 07 09:56:24 compute-1 systemd[1]: Reloading.
Dec 07 09:56:24 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:24 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb498000fa0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:56:24 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:24 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 07 09:56:24 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:24 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 07 09:56:24 compute-1 systemd-rc-local-generator[178567]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:56:24 compute-1 systemd-sysv-generator[178570]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:56:24 compute-1 sudo[178533]: pam_unix(sudo:session): session closed for user root
Dec 07 09:56:25 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:56:25 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:56:25 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:56:25 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:25.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:56:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/095625 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 07 09:56:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:25 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b0001c00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:56:25 compute-1 sudo[178724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbzprapwdqsvwxaxsubpmuumkfvyrwgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101384.9880886-1056-86341015890756/AnsiballZ_systemd.py'
Dec 07 09:56:25 compute-1 sudo[178724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:56:25 compute-1 python3.9[178726]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 07 09:56:25 compute-1 sudo[178724]: pam_unix(sudo:session): session closed for user root
Dec 07 09:56:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:25 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb48c000d00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:56:25 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:56:25 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:56:25 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:56:25.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:56:25 compute-1 ceph-mon[80077]: pgmap v418: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Dec 07 09:56:26 compute-1 sudo[178879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjeathtjgfdwpwfzjqpyjslsiduqntma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101385.7741315-1056-55817057734084/AnsiballZ_systemd.py'
Dec 07 09:56:26 compute-1 sudo[178879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:56:26 compute-1 python3.9[178881]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 07 09:56:26 compute-1 systemd[1]: Reloading.
Dec 07 09:56:26 compute-1 systemd-sysv-generator[178916]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:56:26 compute-1 systemd-rc-local-generator[178913]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:56:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:26 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb490001b40 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:56:26 compute-1 sudo[178879]: pam_unix(sudo:session): session closed for user root
Dec 07 09:56:27 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:56:27 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:56:27 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:27.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:56:27 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:27 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb498001ac0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:56:27 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:27 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 07 09:56:27 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:27 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b0001c00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:56:27 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:56:27 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:56:27 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:56:27.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:56:28 compute-1 ceph-mon[80077]: pgmap v419: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Dec 07 09:56:28 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:56:28 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:28 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb48c001820 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:56:29 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:56:29 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:56:29 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:29.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:56:29 compute-1 sudo[179071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzhoaiquanvpezqeduabmtaosyzvdbql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101388.9095278-1164-132119387484792/AnsiballZ_systemd.py'
Dec 07 09:56:29 compute-1 sudo[179071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:56:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:29 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb490001b40 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:56:29 compute-1 python3.9[179073]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 07 09:56:29 compute-1 systemd[1]: Reloading.
Dec 07 09:56:29 compute-1 systemd-rc-local-generator[179104]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:56:29 compute-1 systemd-sysv-generator[179107]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:56:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:29 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb498001ac0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:56:29 compute-1 systemd[1]: Listening on libvirt proxy daemon socket.
Dec 07 09:56:29 compute-1 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Dec 07 09:56:29 compute-1 sudo[179071]: pam_unix(sudo:session): session closed for user root
Dec 07 09:56:29 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:56:29 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:56:29 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:56:29.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:56:30 compute-1 ceph-mon[80077]: pgmap v420: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Dec 07 09:56:30 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:56:30 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:30 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b0001c00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:56:30 compute-1 podman[179217]: 2025-12-07 09:56:30.586727006 +0000 UTC m=+0.083386245 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 07 09:56:30 compute-1 sudo[179293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqwkphpuqqgigsaxtxdtfyqferedcqve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101390.3309386-1188-245662134402173/AnsiballZ_systemd.py'
Dec 07 09:56:30 compute-1 sudo[179293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:56:30 compute-1 python3.9[179295]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 07 09:56:30 compute-1 sudo[179293]: pam_unix(sudo:session): session closed for user root
Dec 07 09:56:31 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:56:31 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:56:31 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:31.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:56:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:31 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb48c001820 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:56:31 compute-1 sudo[179448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezbagjopimvafmjuhuqzveixxmgfnqwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101391.1372721-1188-39278347484079/AnsiballZ_systemd.py'
Dec 07 09:56:31 compute-1 sudo[179448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:56:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/095631 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 07 09:56:31 compute-1 python3.9[179450]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 07 09:56:31 compute-1 sudo[179448]: pam_unix(sudo:session): session closed for user root
Dec 07 09:56:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:31 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb490001b40 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:56:31 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:56:31 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:56:31 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:56:31.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:56:32 compute-1 ceph-mon[80077]: pgmap v421: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Dec 07 09:56:32 compute-1 sudo[179604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfkgxzxyatbvixhcjeoimfioagrcfise ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101391.9366186-1188-217044069738484/AnsiballZ_systemd.py'
Dec 07 09:56:32 compute-1 sudo[179604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:56:32 compute-1 python3.9[179606]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 07 09:56:32 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:32 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb490001b40 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:56:32 compute-1 sudo[179604]: pam_unix(sudo:session): session closed for user root
Dec 07 09:56:33 compute-1 sudo[179759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywpokpfgafiqdaokblkwnfdhhwlfmtkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101392.7536228-1188-121138421998040/AnsiballZ_systemd.py'
Dec 07 09:56:33 compute-1 sudo[179759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:56:33 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:56:33 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:56:33 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:33.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:56:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:33 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b0001c00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:56:33 compute-1 python3.9[179761]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 07 09:56:33 compute-1 sudo[179759]: pam_unix(sudo:session): session closed for user root
Dec 07 09:56:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:33 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb48c001820 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:56:33 compute-1 sudo[179914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-todsjlvisjynlhoaozdhrhzdwkmfyxac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101393.5663865-1188-147362413432830/AnsiballZ_systemd.py'
Dec 07 09:56:33 compute-1 sudo[179914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:56:33 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:56:33 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:56:33 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:56:33.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:56:34 compute-1 python3.9[179916]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 07 09:56:34 compute-1 ceph-mon[80077]: pgmap v422: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 682 B/s wr, 2 op/s
Dec 07 09:56:34 compute-1 sudo[179914]: pam_unix(sudo:session): session closed for user root
Dec 07 09:56:34 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:34 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb48c001820 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:56:34 compute-1 sudo[180070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-duvjspjamdxcnadkgkntlvlqylzsmioc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101394.34198-1188-197800712941992/AnsiballZ_systemd.py'
Dec 07 09:56:34 compute-1 sudo[180070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:56:34 compute-1 python3.9[180072]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 07 09:56:35 compute-1 sudo[180070]: pam_unix(sudo:session): session closed for user root
Dec 07 09:56:35 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:56:35 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:56:35 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:56:35 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:35.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:56:35 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:35 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb498002b60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:56:35 compute-1 sudo[180225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtfxptxpsadoonbwwwzranjjhovxptsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101395.190212-1188-206010332215308/AnsiballZ_systemd.py'
Dec 07 09:56:35 compute-1 sudo[180225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:56:35 compute-1 python3.9[180227]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 07 09:56:35 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:35 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b0001c00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:56:35 compute-1 sudo[180225]: pam_unix(sudo:session): session closed for user root
Dec 07 09:56:35 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:56:35 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:56:35 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:56:35.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:56:36 compute-1 ceph-mon[80077]: pgmap v423: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 682 B/s wr, 2 op/s
Dec 07 09:56:36 compute-1 sudo[180383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsyttvckwicpfryqrhqasuiungaunjjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101396.0313835-1188-90073453300699/AnsiballZ_systemd.py'
Dec 07 09:56:36 compute-1 sudo[180383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:56:36 compute-1 sshd-session[180255]: Connection closed by authenticating user root 104.248.193.130 port 42764 [preauth]
Dec 07 09:56:36 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:36 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb48c001820 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:56:36 compute-1 python3.9[180385]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 07 09:56:36 compute-1 sudo[180383]: pam_unix(sudo:session): session closed for user root
Dec 07 09:56:37 compute-1 sudo[180538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cypsqmlpzohlllnkuddrybfuvwvffugm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101396.8844025-1188-89814075550692/AnsiballZ_systemd.py'
Dec 07 09:56:37 compute-1 sudo[180538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:56:37 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:56:37 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:56:37 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:37.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:56:37 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:37 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb48c001820 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:56:37 compute-1 python3.9[180540]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 07 09:56:37 compute-1 sudo[180538]: pam_unix(sudo:session): session closed for user root
Dec 07 09:56:37 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:37 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb498002b60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:56:37 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:56:37 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:56:37 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:56:37.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:56:37 compute-1 sudo[180693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulzhmzdmlripdsbygrwkoacgbfzmgmta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101397.6722279-1188-95233129826000/AnsiballZ_systemd.py'
Dec 07 09:56:37 compute-1 sudo[180693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:56:38 compute-1 ceph-mon[80077]: pgmap v424: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 170 B/s wr, 0 op/s
Dec 07 09:56:38 compute-1 python3.9[180695]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 07 09:56:38 compute-1 sudo[180693]: pam_unix(sudo:session): session closed for user root
Dec 07 09:56:38 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:38 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b0001c00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:56:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:56:38.627 142603 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 09:56:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:56:38.628 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 09:56:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:56:38.628 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 09:56:38 compute-1 sudo[180849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnbhqzolvbsqgdvyfybkyoavzogyokoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101398.536753-1188-90449360806317/AnsiballZ_systemd.py'
Dec 07 09:56:38 compute-1 sudo[180849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:56:39 compute-1 sudo[180852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 09:56:39 compute-1 sudo[180852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:56:39 compute-1 sudo[180852]: pam_unix(sudo:session): session closed for user root
Dec 07 09:56:39 compute-1 python3.9[180851]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 07 09:56:39 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:56:39 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:56:39 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:39.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:56:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:39 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4900032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:56:39 compute-1 sudo[180849]: pam_unix(sudo:session): session closed for user root
Dec 07 09:56:39 compute-1 sudo[181029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdxkwrcxzqvufrkrwdlwxwjkwlpmbcug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101399.3840604-1188-75734808238854/AnsiballZ_systemd.py'
Dec 07 09:56:39 compute-1 sudo[181029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:56:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:39 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb48c003880 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:56:39 compute-1 python3.9[181031]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 07 09:56:39 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:56:39 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:56:39 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:56:39.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:56:40 compute-1 sudo[181029]: pam_unix(sudo:session): session closed for user root
Dec 07 09:56:40 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:56:40 compute-1 ceph-mon[80077]: pgmap v425: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 170 B/s wr, 0 op/s
Dec 07 09:56:40 compute-1 sudo[181185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqdvzpynyninsyhdeogipmqjntbuunrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101400.1769953-1188-81830086402887/AnsiballZ_systemd.py'
Dec 07 09:56:40 compute-1 sudo[181185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:56:40 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:40 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb498003870 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:56:40 compute-1 python3.9[181187]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 07 09:56:40 compute-1 sudo[181185]: pam_unix(sudo:session): session closed for user root
Dec 07 09:56:40 compute-1 podman[181189]: 2025-12-07 09:56:40.790561503 +0000 UTC m=+0.054228440 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Dec 07 09:56:41 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:56:41 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:56:41 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:41.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:56:41 compute-1 sudo[181359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpqltcqttmazdcyexqfsbqyqirufmxtl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101400.9233549-1188-105009496202278/AnsiballZ_systemd.py'
Dec 07 09:56:41 compute-1 sudo[181359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:56:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:41 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b0001c00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:56:41 compute-1 python3.9[181361]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 07 09:56:41 compute-1 sudo[181359]: pam_unix(sudo:session): session closed for user root
Dec 07 09:56:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:41 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb490003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:56:41 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:56:41 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:56:41 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:56:41.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:56:42 compute-1 ceph-mon[80077]: pgmap v426: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 170 B/s wr, 1 op/s
Dec 07 09:56:42 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:42 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb48c003880 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:56:43 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:56:43 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:56:43 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:43.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:56:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:43 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb498003870 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:56:43 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:56:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:43 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b0001c00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:56:43 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:56:43 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:56:43 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:56:43.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:56:44 compute-1 ceph-mon[80077]: pgmap v427: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:56:44 compute-1 sudo[181516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdzdroyupxbbibrbctgfuwlenfwsxauy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101404.111148-1494-70311777260125/AnsiballZ_file.py'
Dec 07 09:56:44 compute-1 sudo[181516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:56:44 compute-1 python3.9[181518]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:56:44 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:44 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb490003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:56:44 compute-1 sudo[181516]: pam_unix(sudo:session): session closed for user root
Dec 07 09:56:45 compute-1 sudo[181668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouqfzxonjvsxlqirvdqhqsdborknrwom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101404.7305405-1494-98712825509766/AnsiballZ_file.py'
Dec 07 09:56:45 compute-1 sudo[181668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:56:45 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:56:45 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:56:45 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:56:45 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:45.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:56:45 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:45 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb48c003880 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:56:45 compute-1 python3.9[181670]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:56:45 compute-1 sudo[181668]: pam_unix(sudo:session): session closed for user root
Dec 07 09:56:45 compute-1 sudo[181820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utafpqydrnjdeaaibbifzupbezawjkub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101405.4314892-1494-4222500487987/AnsiballZ_file.py'
Dec 07 09:56:45 compute-1 sudo[181820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:56:45 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:45 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb498003870 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:56:45 compute-1 python3.9[181822]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:56:45 compute-1 sudo[181820]: pam_unix(sudo:session): session closed for user root
Dec 07 09:56:45 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:56:45 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:56:45 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:56:45.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:56:46 compute-1 sudo[181973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzrulvqvhfdvtbmfkdkfsoftjkpgaxrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101406.030597-1494-179848785962789/AnsiballZ_file.py'
Dec 07 09:56:46 compute-1 sudo[181973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:56:46 compute-1 ceph-mon[80077]: pgmap v428: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:56:46 compute-1 python3.9[181975]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:56:46 compute-1 sudo[181973]: pam_unix(sudo:session): session closed for user root
Dec 07 09:56:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:46 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b0001c00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:56:46 compute-1 sudo[182125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yujkvtuwfjbhtuahrcihglmnasebeswm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101406.6904237-1494-55083619134562/AnsiballZ_file.py'
Dec 07 09:56:46 compute-1 sudo[182125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:56:47 compute-1 python3.9[182127]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:56:47 compute-1 sudo[182125]: pam_unix(sudo:session): session closed for user root
Dec 07 09:56:47 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:56:47 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:56:47 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:47.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:56:47 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:47 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb490003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:56:47 compute-1 sudo[182277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvglxkvjpazwpzgdpayfiksyiphqhszo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101407.327228-1494-64604531028552/AnsiballZ_file.py'
Dec 07 09:56:47 compute-1 sudo[182277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:56:47 compute-1 python3.9[182279]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:56:47 compute-1 sudo[182277]: pam_unix(sudo:session): session closed for user root
Dec 07 09:56:47 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:47 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb48c003880 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:56:47 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:56:47 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:56:47 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:56:47.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:56:48 compute-1 ceph-mon[80077]: pgmap v429: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:56:48 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:48 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb498003870 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:56:49 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:56:49 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:56:49 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:49.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:56:49 compute-1 sudo[182430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfvcyuvmsteaazrdnmubyoqkyjiqvgrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101408.792592-1623-4180583344833/AnsiballZ_stat.py'
Dec 07 09:56:49 compute-1 sudo[182430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:56:49 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:49 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b0003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:56:49 compute-1 python3.9[182432]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:56:49 compute-1 sudo[182430]: pam_unix(sudo:session): session closed for user root
Dec 07 09:56:49 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:49 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb490003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:56:49 compute-1 sudo[182555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iokdxraqlbqdksmnwbevkkzpxmomtiyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101408.792592-1623-4180583344833/AnsiballZ_copy.py'
Dec 07 09:56:49 compute-1 sudo[182555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:56:49 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:56:49 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:56:49 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:56:49.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:56:50 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:56:50 compute-1 python3.9[182557]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765101408.792592-1623-4180583344833/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:56:50 compute-1 sudo[182555]: pam_unix(sudo:session): session closed for user root
Dec 07 09:56:50 compute-1 ceph-mon[80077]: pgmap v430: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:56:50 compute-1 sudo[182708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgqqsqzwrudxcbqyxjvjnijwnejvxavm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101410.312988-1623-29724971071358/AnsiballZ_stat.py'
Dec 07 09:56:50 compute-1 sudo[182708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:56:50 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:50 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb48c003880 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:56:50 compute-1 python3.9[182710]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:56:50 compute-1 sudo[182708]: pam_unix(sudo:session): session closed for user root
Dec 07 09:56:51 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:56:51 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:56:51 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:51.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:56:51 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:51 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb498003870 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:56:51 compute-1 sudo[182833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksjgfqmixovwkwllqwqecnnrzzkvxruo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101410.312988-1623-29724971071358/AnsiballZ_copy.py'
Dec 07 09:56:51 compute-1 sudo[182833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:56:51 compute-1 ceph-mon[80077]: pgmap v431: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 07 09:56:51 compute-1 python3.9[182835]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765101410.312988-1623-29724971071358/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:56:51 compute-1 sudo[182833]: pam_unix(sudo:session): session closed for user root
Dec 07 09:56:51 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:51 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b0003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:56:51 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:56:51 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:56:51 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:56:51.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:56:52 compute-1 sudo[182985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dulrcoglgemqytqmmzccmzzmzwqnycmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101411.720372-1623-170152981589115/AnsiballZ_stat.py'
Dec 07 09:56:52 compute-1 sudo[182985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:56:52 compute-1 python3.9[182987]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:56:52 compute-1 sudo[182985]: pam_unix(sudo:session): session closed for user root
Dec 07 09:56:52 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:52 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb490003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:56:52 compute-1 sudo[183111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwtyhxwjwoznnqfddkeckfatoelvhaeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101411.720372-1623-170152981589115/AnsiballZ_copy.py'
Dec 07 09:56:52 compute-1 sudo[183111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:56:52 compute-1 python3.9[183113]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765101411.720372-1623-170152981589115/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:56:52 compute-1 sudo[183111]: pam_unix(sudo:session): session closed for user root
Dec 07 09:56:53 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:56:53 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:56:53 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:53.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:56:53 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:53 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb48c003880 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:56:53 compute-1 sudo[183265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynjbsjdlpzjlgmvakqtdypoccywxgsgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101413.1110578-1623-187653995249791/AnsiballZ_stat.py'
Dec 07 09:56:53 compute-1 sudo[183265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:56:53 compute-1 python3.9[183267]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:56:53 compute-1 sudo[183265]: pam_unix(sudo:session): session closed for user root
Dec 07 09:56:53 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:53 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb498003870 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:56:53 compute-1 ceph-mon[80077]: pgmap v432: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:56:54 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:56:54 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:56:54 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:56:54.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:56:54 compute-1 sudo[183390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifjrzgupyetpajthqvqnisptdsvyydiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101413.1110578-1623-187653995249791/AnsiballZ_copy.py'
Dec 07 09:56:54 compute-1 sudo[183390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:56:54 compute-1 python3.9[183392]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765101413.1110578-1623-187653995249791/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:56:54 compute-1 sudo[183390]: pam_unix(sudo:session): session closed for user root
Dec 07 09:56:54 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:54 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb47c000b60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:56:54 compute-1 sudo[183543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsuwvoygohihygvavthekwthvehknmjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101414.3938913-1623-167211671153680/AnsiballZ_stat.py'
Dec 07 09:56:54 compute-1 sudo[183543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:56:54 compute-1 python3.9[183545]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:56:54 compute-1 sudo[183543]: pam_unix(sudo:session): session closed for user root
Dec 07 09:56:55 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:56:55 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:56:55 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:56:55 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:55.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:56:55 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:55 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb484000b60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:56:55 compute-1 sudo[183668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pylwhfqfkspncrrremdfctdckokegfpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101414.3938913-1623-167211671153680/AnsiballZ_copy.py'
Dec 07 09:56:55 compute-1 sudo[183668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:56:55 compute-1 python3.9[183670]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765101414.3938913-1623-167211671153680/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:56:55 compute-1 sudo[183668]: pam_unix(sudo:session): session closed for user root
Dec 07 09:56:55 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:55 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb484000b60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:56:55 compute-1 ceph-mon[80077]: pgmap v433: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:56:56 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:56:56 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:56:56 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:56:56.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:56:56 compute-1 sudo[183820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzqymdyjxawbrdxwdanaxgmyebnrvnsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101415.7672853-1623-188725225532932/AnsiballZ_stat.py'
Dec 07 09:56:56 compute-1 sudo[183820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:56:56 compute-1 python3.9[183822]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:56:56 compute-1 sudo[183820]: pam_unix(sudo:session): session closed for user root
Dec 07 09:56:56 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:56 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb498003870 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:56:56 compute-1 sudo[183946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elcweurxamqsuhpcppfitnvdwbiilrfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101415.7672853-1623-188725225532932/AnsiballZ_copy.py'
Dec 07 09:56:56 compute-1 sudo[183946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:56:56 compute-1 python3.9[183948]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765101415.7672853-1623-188725225532932/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:56:56 compute-1 sudo[183946]: pam_unix(sudo:session): session closed for user root
Dec 07 09:56:57 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:56:57 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:56:57 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:57.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:56:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:57 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb47c0016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:56:57 compute-1 sudo[184098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnyygvhecuekfwfncihhlgyypnqanogd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101417.0550578-1623-35352939764782/AnsiballZ_stat.py'
Dec 07 09:56:57 compute-1 sudo[184098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:56:57 compute-1 python3.9[184100]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:56:57 compute-1 sudo[184098]: pam_unix(sudo:session): session closed for user root
Dec 07 09:56:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:57 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb47c0016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:56:57 compute-1 sudo[184221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-furdffxnjzjgmtneekglungjqlphmepv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101417.0550578-1623-35352939764782/AnsiballZ_copy.py'
Dec 07 09:56:57 compute-1 sudo[184221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:56:58 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:56:58 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:56:58 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:56:58.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:56:58 compute-1 ceph-mon[80077]: pgmap v434: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:56:58 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:56:58 compute-1 python3.9[184223]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765101417.0550578-1623-35352939764782/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:56:58 compute-1 sudo[184221]: pam_unix(sudo:session): session closed for user root
Dec 07 09:56:58 compute-1 sudo[184374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vskwfdvlvtcmgtrmwpknwwxmcqnyexur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101418.29409-1623-254909337298680/AnsiballZ_stat.py'
Dec 07 09:56:58 compute-1 sudo[184374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:56:58 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:58 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb47c0016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:56:58 compute-1 python3.9[184376]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:56:58 compute-1 sudo[184374]: pam_unix(sudo:session): session closed for user root
Dec 07 09:56:59 compute-1 sudo[184499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmluidihhqjvlscqxdahvxzjonkljssi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101418.29409-1623-254909337298680/AnsiballZ_copy.py'
Dec 07 09:56:59 compute-1 sudo[184499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:56:59 compute-1 sudo[184502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 09:56:59 compute-1 sudo[184502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:56:59 compute-1 sudo[184502]: pam_unix(sudo:session): session closed for user root
Dec 07 09:56:59 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:56:59 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:56:59 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:56:59.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:56:59 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:59 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb47c0016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:56:59 compute-1 python3.9[184501]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765101418.29409-1623-254909337298680/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:56:59 compute-1 sudo[184499]: pam_unix(sudo:session): session closed for user root
Dec 07 09:56:59 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:56:59 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb47c0016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:00 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:00 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:57:00 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:00.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:57:00 compute-1 ceph-mon[80077]: pgmap v435: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:57:00 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:57:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:00 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb47c0016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:01 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:01 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:57:01 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:01.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:57:01 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:01 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4a8000b60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:01 compute-1 sudo[184695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtudoshthhtklxupuqmwgbhdqyzguaqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101420.9299703-1962-4384959596449/AnsiballZ_command.py'
Dec 07 09:57:01 compute-1 sudo[184695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:01 compute-1 podman[184652]: 2025-12-07 09:57:01.283577641 +0000 UTC m=+0.093099743 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 07 09:57:01 compute-1 python3.9[184703]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Dec 07 09:57:01 compute-1 sudo[184695]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:01 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:01 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb490003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:02 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:02 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:57:02 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:02.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:57:02 compute-1 ceph-mon[80077]: pgmap v436: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 07 09:57:02 compute-1 sudo[184858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxftmbqbgbxitrhlgxhwihggrthwhmhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101421.861877-1989-52466313293363/AnsiballZ_file.py'
Dec 07 09:57:02 compute-1 sudo[184858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:02 compute-1 python3.9[184860]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:57:02 compute-1 sudo[184858]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:02 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:02 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb498003870 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:02 compute-1 sudo[185010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhemjdzvqivhxxdkhyjhmsourqvgoors ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101422.548206-1989-260911097557194/AnsiballZ_file.py'
Dec 07 09:57:02 compute-1 sudo[185010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:03 compute-1 python3.9[185012]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:57:03 compute-1 sudo[185010]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:03 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:03 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:57:03 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:03.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:57:03 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:03 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb47c0016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:03 compute-1 sudo[185162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ciwpglkpbzwtimmpjqcksfmwlvrfmtbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101423.224162-1989-69877419943426/AnsiballZ_file.py'
Dec 07 09:57:03 compute-1 sudo[185162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:03 compute-1 python3.9[185164]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:57:03 compute-1 sudo[185162]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:03 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:03 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4a80016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:04 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:04 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:57:04 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:04.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:57:04 compute-1 sudo[185315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qykujclybdxvcqqqfmfxejvbqdcttipw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101423.840864-1989-110226125715830/AnsiballZ_file.py'
Dec 07 09:57:04 compute-1 sudo[185315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:04 compute-1 ceph-mon[80077]: pgmap v437: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:57:04 compute-1 python3.9[185317]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:57:04 compute-1 sudo[185315]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:04 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:04 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb490003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:04 compute-1 sudo[185467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcfrqpmlpvuhpgjfnfuatbgukkhjwuhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101424.5976827-1989-212080522937330/AnsiballZ_file.py'
Dec 07 09:57:04 compute-1 sudo[185467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:05 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:57:05 compute-1 python3.9[185469]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:57:05 compute-1 sudo[185467]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:05 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:05 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:57:05 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:05.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:57:05 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:05 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb498003870 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:05 compute-1 sudo[185619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-piutmiqwakssqpwkfsyjgloikhnoxdkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101425.3182638-1989-267638128010306/AnsiballZ_file.py'
Dec 07 09:57:05 compute-1 sudo[185619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:05 compute-1 python3.9[185621]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:57:05 compute-1 sudo[185619]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:05 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:05 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb498003870 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:06 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:06 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:57:06 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:06.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:57:06 compute-1 sudo[185772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukcbetwuduplobxnyxbiiseoxcrcylxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101425.9157145-1989-64269013311706/AnsiballZ_file.py'
Dec 07 09:57:06 compute-1 sudo[185772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:06 compute-1 ceph-mon[80077]: pgmap v438: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:57:06 compute-1 python3.9[185774]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:57:06 compute-1 sudo[185772]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:06 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb498003870 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:06 compute-1 sudo[185924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kavsxnqdwmgcdoouhpxbqshmgqmjvbnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101426.5793874-1989-42506847758953/AnsiballZ_file.py'
Dec 07 09:57:06 compute-1 sudo[185924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:07 compute-1 python3.9[185926]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:57:07 compute-1 sudo[185924]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:07 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:07 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:57:07 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:07.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:57:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:07 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb490003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:07 compute-1 sudo[186076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqzqrhjhhrtayuhbrimfjhyzlspajxes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101427.3171427-1989-88320749606809/AnsiballZ_file.py'
Dec 07 09:57:07 compute-1 sudo[186076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:07 compute-1 python3.9[186078]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:57:07 compute-1 sudo[186076]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:07 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4a8001fc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:08 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:08 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:57:08 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:08.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:57:08 compute-1 sudo[186229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znqbzkinyulrapkxbwyarvpdrzajgigj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101427.9508665-1989-207349160550740/AnsiballZ_file.py'
Dec 07 09:57:08 compute-1 sudo[186229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:08 compute-1 ceph-mon[80077]: pgmap v439: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:57:08 compute-1 python3.9[186231]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:57:08 compute-1 sudo[186229]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:08 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:08 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb47c0036e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:08 compute-1 sudo[186381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxyinakguvybkicmxhwqawpkypuvselu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101428.5960805-1989-139404542478132/AnsiballZ_file.py'
Dec 07 09:57:08 compute-1 sudo[186381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:09 compute-1 python3.9[186383]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:57:09 compute-1 sudo[186381]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:09 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:09 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:57:09 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:09.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:57:09 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:09 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb498003870 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:09 compute-1 sudo[186533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuelgucfqeitdpdruhkhpvtbrpxofgne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101429.2608552-1989-190622694036665/AnsiballZ_file.py'
Dec 07 09:57:09 compute-1 sudo[186533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:09 compute-1 python3.9[186535]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:57:09 compute-1 sudo[186533]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:09 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:09 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb498003870 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:10 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:10 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:57:10 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:10.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:57:10 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:57:10 compute-1 sudo[186686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfueovqozmtnbdtptbfxggrpmibkenhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101429.8598323-1989-60694415541964/AnsiballZ_file.py'
Dec 07 09:57:10 compute-1 sudo[186686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:10 compute-1 ceph-mon[80077]: pgmap v440: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:57:10 compute-1 python3.9[186688]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:57:10 compute-1 sudo[186686]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:10 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:10 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb498003870 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:10 compute-1 sudo[186838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvsyyzlioenxefqwdqmtprlqdrpmftwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101430.4528084-1989-74805306202452/AnsiballZ_file.py'
Dec 07 09:57:10 compute-1 sudo[186838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:10 compute-1 python3.9[186840]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:57:11 compute-1 sudo[186838]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:11 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:11 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:57:11 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:11.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:57:11 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:11 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb47c0036e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:11 compute-1 podman[186865]: 2025-12-07 09:57:11.575569495 +0000 UTC m=+0.076455572 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent)
Dec 07 09:57:11 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:11 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb498003870 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:12 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:12 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:57:12 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:12.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:57:12 compute-1 ceph-mon[80077]: pgmap v441: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 07 09:57:12 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:12 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4a8001fc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:13 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:13 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb490003f50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:13 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:13 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:57:13 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:13.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:57:13 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:57:13 compute-1 sudo[187010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idlptvisreazslpjbjdrfasdqmxsdcrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101433.1006396-2286-173180686700204/AnsiballZ_stat.py'
Dec 07 09:57:13 compute-1 sudo[187010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:13 compute-1 python3.9[187012]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:57:13 compute-1 sudo[187010]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:13 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:13 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb47c003860 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:14 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:14 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:57:14 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:14.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:57:14 compute-1 sudo[187134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvqhgpvzdddlqrosifnqxrkpgmsdgcxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101433.1006396-2286-173180686700204/AnsiballZ_copy.py'
Dec 07 09:57:14 compute-1 sudo[187134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:14 compute-1 ceph-mon[80077]: pgmap v442: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:57:14 compute-1 python3.9[187136]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101433.1006396-2286-173180686700204/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:57:14 compute-1 sudo[187134]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:14 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:14 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb498004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:15 compute-1 sudo[187286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfhsybfrbkrbdhghrutvvhuejftthbab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101434.6907585-2286-272559855835413/AnsiballZ_stat.py'
Dec 07 09:57:15 compute-1 sudo[187286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:15 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:57:15 compute-1 python3.9[187288]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:57:15 compute-1 sudo[187286]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:15 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:15 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4a8001fc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:15 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:15 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:57:15 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:15.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:57:15 compute-1 sudo[187409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfzukwhultsxjxhaaryavmdjtenqxxvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101434.6907585-2286-272559855835413/AnsiballZ_copy.py'
Dec 07 09:57:15 compute-1 sudo[187409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:15 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:15 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb490003f50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:15 compute-1 python3.9[187411]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101434.6907585-2286-272559855835413/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:57:15 compute-1 sudo[187409]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:15 compute-1 sudo[187412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 09:57:15 compute-1 sudo[187412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:57:15 compute-1 sudo[187412]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:16 compute-1 sudo[187461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 07 09:57:16 compute-1 sudo[187461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:57:16 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:16 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:57:16 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:16.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:57:16 compute-1 ceph-mon[80077]: pgmap v443: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:57:16 compute-1 sudo[187626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crvtuvyiyrdebvsuuhvhzabcwsxlwmap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101436.058231-2286-39463609829853/AnsiballZ_stat.py'
Dec 07 09:57:16 compute-1 sudo[187626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:16 compute-1 python3.9[187629]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:57:16 compute-1 sudo[187626]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:16 compute-1 sudo[187461]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:16 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:16 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb47c0039e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:16 compute-1 sudo[187766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkjufurrpleeuiripdlxoaldzkehivlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101436.058231-2286-39463609829853/AnsiballZ_copy.py'
Dec 07 09:57:16 compute-1 sudo[187766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:17 compute-1 python3.9[187768]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101436.058231-2286-39463609829853/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:57:17 compute-1 sudo[187766]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:17 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:17 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb498004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:17 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:17 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:57:17 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:17.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:57:17 compute-1 sudo[187918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qaoncujcvadivpthtuvotafijhtgziol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101437.281658-2286-163019477555401/AnsiballZ_stat.py'
Dec 07 09:57:17 compute-1 sudo[187918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:17 compute-1 python3.9[187920]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:57:17 compute-1 sudo[187918]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:17 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:17 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4a80032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:18 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:18 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:57:18 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:18.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:57:18 compute-1 sudo[188041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qebolnvmukyoycwhajzsvnqoagqhying ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101437.281658-2286-163019477555401/AnsiballZ_copy.py'
Dec 07 09:57:18 compute-1 sudo[188041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:18 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/095718 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 07 09:57:18 compute-1 python3.9[188043]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101437.281658-2286-163019477555401/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:57:18 compute-1 sudo[188041]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:18 compute-1 ceph-mon[80077]: pgmap v444: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:57:18 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:18 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb490003f70 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:18 compute-1 sudo[188194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgcvtgxhillmytqjsbyxxikxnxrsrquu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101438.4438531-2286-140724726273037/AnsiballZ_stat.py'
Dec 07 09:57:18 compute-1 sudo[188194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:18 compute-1 python3.9[188196]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:57:18 compute-1 sudo[188194]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:19 compute-1 sudo[188291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 09:57:19 compute-1 sudo[188291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:57:19 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:19 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb47c0039e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:19 compute-1 sudo[188291]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:19 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:19 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:57:19 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:19.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:57:19 compute-1 sudo[188342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rntbhrbtqwhghydgvhvmpjezssgaxawf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101438.4438531-2286-140724726273037/AnsiballZ_copy.py'
Dec 07 09:57:19 compute-1 sudo[188342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:19 compute-1 python3.9[188344]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101438.4438531-2286-140724726273037/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:57:19 compute-1 sudo[188342]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:19 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:19 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb47c0039e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:20 compute-1 sudo[188494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmamaaumozjzhwinfkwqxwpnonzevxsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101439.6846077-2286-216713129348331/AnsiballZ_stat.py'
Dec 07 09:57:20 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:20 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:57:20 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:20.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:57:20 compute-1 sudo[188494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:20 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:57:20 compute-1 ceph-mon[80077]: pgmap v445: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:57:20 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:57:20 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:57:20 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:57:20 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 07 09:57:20 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:57:20 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:57:20 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 07 09:57:20 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 07 09:57:20 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:57:20 compute-1 python3.9[188496]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:57:20 compute-1 sudo[188494]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:20 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4a80032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:20 compute-1 sudo[188618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejhoiasyroyofpgtvluipqgesbyrxhml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101439.6846077-2286-216713129348331/AnsiballZ_copy.py'
Dec 07 09:57:20 compute-1 sudo[188618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:20 compute-1 python3.9[188620]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101439.6846077-2286-216713129348331/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:57:20 compute-1 sudo[188618]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:21 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb490003f90 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:21 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:21 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:57:21 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:21.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:57:21 compute-1 sudo[188772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlhskujeursdlympyyrfaemfmgmqplwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101441.0421107-2286-105580203275389/AnsiballZ_stat.py'
Dec 07 09:57:21 compute-1 sudo[188772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:21 compute-1 python3.9[188774]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:57:21 compute-1 sudo[188772]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:21 compute-1 sshd-session[188697]: Connection closed by authenticating user root 104.248.193.130 port 51128 [preauth]
Dec 07 09:57:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:21 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb498004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:21 compute-1 sudo[188895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqjlxhdcaricefamzndoaebbrxtfhuua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101441.0421107-2286-105580203275389/AnsiballZ_copy.py'
Dec 07 09:57:21 compute-1 sudo[188895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:22 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:22 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:57:22 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:22.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:57:22 compute-1 python3.9[188897]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101441.0421107-2286-105580203275389/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:57:22 compute-1 sudo[188895]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:22 compute-1 ceph-mon[80077]: pgmap v446: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 07 09:57:22 compute-1 sudo[189048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkaolklygdhzbvifbnqchesjwaanlksj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101442.3212295-2286-82178463288792/AnsiballZ_stat.py'
Dec 07 09:57:22 compute-1 sudo[189048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:22 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:22 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb47c0039e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:22 compute-1 python3.9[189050]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:57:22 compute-1 sudo[189048]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:23 compute-1 sudo[189171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohlmimirjeiletenpzryuamrgjxapybd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101442.3212295-2286-82178463288792/AnsiballZ_copy.py'
Dec 07 09:57:23 compute-1 sudo[189171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:23 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb47c0039e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:23 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:23 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:57:23 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:23.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:57:23 compute-1 python3.9[189173]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101442.3212295-2286-82178463288792/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:57:23 compute-1 sudo[189171]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:23 compute-1 sudo[189323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryqdfdjbgtwyirqlqbkjidfqvyehrzhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101443.4943883-2286-69510139035149/AnsiballZ_stat.py'
Dec 07 09:57:23 compute-1 sudo[189323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:23 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb490003fb0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:23 compute-1 python3.9[189325]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:57:23 compute-1 sudo[189323]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:24 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:24 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:57:24 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:24.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:57:24 compute-1 ceph-mon[80077]: pgmap v447: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 07 09:57:24 compute-1 sudo[189447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msfgtlceupajfpqvveetadwhsqwxdrfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101443.4943883-2286-69510139035149/AnsiballZ_copy.py'
Dec 07 09:57:24 compute-1 sudo[189447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:24 compute-1 sudo[189450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 07 09:57:24 compute-1 sudo[189450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:57:24 compute-1 sudo[189450]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:24 compute-1 python3.9[189449]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101443.4943883-2286-69510139035149/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:57:24 compute-1 sudo[189447]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:24 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:24 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb498004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:24 compute-1 sudo[189624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enlzsjkdamezyfxqjsbqzyzelsssctjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101444.669179-2286-111605502257124/AnsiballZ_stat.py'
Dec 07 09:57:24 compute-1 sudo[189624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:25 compute-1 python3.9[189626]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:57:25 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:57:25 compute-1 sudo[189624]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:25 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb47c0039e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:25 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:25 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:57:25 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:25.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:57:25 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:57:25 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:57:25 compute-1 sudo[189747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nscvqybqoyjiiljqxjiwifoaptqxatdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101444.669179-2286-111605502257124/AnsiballZ_copy.py'
Dec 07 09:57:25 compute-1 sudo[189747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:25 compute-1 python3.9[189749]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101444.669179-2286-111605502257124/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:57:25 compute-1 sudo[189747]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:25 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb47c0039e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:26 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:26 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:57:26 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:26.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:57:26 compute-1 sudo[189899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-siowblcwuplrbfamhqklkcuvvnvnabiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101445.826653-2286-187341890787569/AnsiballZ_stat.py'
Dec 07 09:57:26 compute-1 sudo[189899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:26 compute-1 python3.9[189901]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:57:26 compute-1 sudo[189899]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:26 compute-1 ceph-mon[80077]: pgmap v448: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 07 09:57:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:26 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 07 09:57:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:26 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb490003fd0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:26 compute-1 sudo[190023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czjmysvkffqjxbrqptnyrmkrzriyhnjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101445.826653-2286-187341890787569/AnsiballZ_copy.py'
Dec 07 09:57:26 compute-1 sudo[190023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:26 compute-1 python3.9[190025]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101445.826653-2286-187341890787569/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:57:26 compute-1 sudo[190023]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:27 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:27 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb498004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:27 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:27 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:57:27 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:27.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:57:27 compute-1 sudo[190175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqbsutlbjalsxzsgzhputyaudyeeebhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101447.1644423-2286-16848034585738/AnsiballZ_stat.py'
Dec 07 09:57:27 compute-1 sudo[190175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:27 compute-1 python3.9[190177]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:57:27 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:27 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb47c0039e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:27 compute-1 sudo[190175]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:28 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:28 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:57:28 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:28.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:57:28 compute-1 sudo[190299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qydhyqomqxfmcvemeiqxbqcntisggesx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101447.1644423-2286-16848034585738/AnsiballZ_copy.py'
Dec 07 09:57:28 compute-1 sudo[190299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:28 compute-1 ceph-mon[80077]: pgmap v449: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 07 09:57:28 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:57:28 compute-1 python3.9[190301]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101447.1644423-2286-16848034585738/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:57:28 compute-1 sudo[190299]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:28 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:28 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4a80032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:28 compute-1 sudo[190451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwavyvqoxvsaxcrqhjdxffjcxenceevq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101448.5329413-2286-109611972348949/AnsiballZ_stat.py'
Dec 07 09:57:28 compute-1 sudo[190451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:28 compute-1 python3.9[190453]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:57:28 compute-1 sudo[190451]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:29 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4a80032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:29 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:29 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:57:29 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:29.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:57:29 compute-1 sudo[190574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcaqklrngixogqnwczuajviuvivrclce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101448.5329413-2286-109611972348949/AnsiballZ_copy.py'
Dec 07 09:57:29 compute-1 sudo[190574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:29 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 07 09:57:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:29 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 07 09:57:29 compute-1 python3.9[190576]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101448.5329413-2286-109611972348949/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:57:29 compute-1 sudo[190574]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:29 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb498004580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:29 compute-1 sudo[190726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqfozvczdduksrhvmqfrqwznmkxrbprc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101449.7006085-2286-71688295212140/AnsiballZ_stat.py'
Dec 07 09:57:29 compute-1 sudo[190726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:30 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:30 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:57:30 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:30.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:57:30 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:57:30 compute-1 python3.9[190728]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:57:30 compute-1 sudo[190726]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:30 compute-1 ceph-mon[80077]: pgmap v450: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 170 B/s wr, 0 op/s
Dec 07 09:57:30 compute-1 sudo[190850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxhnytgyiuttxdryocyzwelpcswxtvjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101449.7006085-2286-71688295212140/AnsiballZ_copy.py'
Dec 07 09:57:30 compute-1 sudo[190850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:30 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:30 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb47c0039e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:30 compute-1 python3.9[190852]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101449.7006085-2286-71688295212140/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:57:30 compute-1 sudo[190850]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:31 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4a80032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:31 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:31 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:57:31 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:31.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:57:31 compute-1 podman[190880]: 2025-12-07 09:57:31.568155298 +0000 UTC m=+0.075072525 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 07 09:57:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:31 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4a80032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:32 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:32 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:57:32 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:32.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:57:32 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:32 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 07 09:57:32 compute-1 ceph-mon[80077]: pgmap v451: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Dec 07 09:57:32 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:32 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb48c001230 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:33 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb47c003bf0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:33 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:33 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:57:33 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:33.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:57:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:33 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4a80032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:34 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:34 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:57:34 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:34.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:57:34 compute-1 ceph-mon[80077]: pgmap v452: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Dec 07 09:57:34 compute-1 python3.9[191033]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:57:34 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:34 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb490004030 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:35 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:57:35 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:35 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb48c002140 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:35 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:35 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:57:35 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:35.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:57:35 compute-1 sudo[191186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lapqezmpmyobiijcgsfmcrgsrbqvtvng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101455.0971339-2904-40993200045822/AnsiballZ_seboolean.py'
Dec 07 09:57:35 compute-1 sudo[191186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:35 compute-1 python3.9[191188]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Dec 07 09:57:35 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:35 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb47c003bf0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:36 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:36 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:57:36 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:36.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:57:36 compute-1 ceph-mon[80077]: pgmap v453: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 07 09:57:36 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:36 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4a80032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:36 compute-1 sudo[191186]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:37 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:37 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb490004030 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:37 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:37 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:57:37 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:37.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:57:37 compute-1 sudo[191343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhquqcttgiyswdflrtfcrenbdjaaofub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101457.3481193-2928-61812480005742/AnsiballZ_copy.py'
Dec 07 09:57:37 compute-1 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Dec 07 09:57:37 compute-1 sudo[191343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:37 compute-1 python3.9[191345]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:57:37 compute-1 sudo[191343]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:37 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:37 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb48c002140 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:38 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:38 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:57:38 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:38.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:57:38 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/095738 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 07 09:57:38 compute-1 sudo[191496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldmgjjxqnmkaakznhbvikrclknuipaer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101457.982383-2928-105410449076266/AnsiballZ_copy.py'
Dec 07 09:57:38 compute-1 sudo[191496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:38 compute-1 ceph-mon[80077]: pgmap v454: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Dec 07 09:57:38 compute-1 python3.9[191498]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:57:38 compute-1 sudo[191496]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:57:38.628 142603 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 09:57:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:57:38.629 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 09:57:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:57:38.629 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 09:57:38 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:38 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb47c003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:39 compute-1 sudo[191648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcjezjquyslgofobnelzvdqwscijibxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101458.6793838-2928-161463735418442/AnsiballZ_copy.py'
Dec 07 09:57:39 compute-1 sudo[191648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:39 compute-1 python3.9[191650]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:57:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:39 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4a80032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:39 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:39 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:57:39 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:39.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:57:39 compute-1 sudo[191648]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:39 compute-1 sudo[191651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 09:57:39 compute-1 sudo[191651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:57:39 compute-1 sudo[191651]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:39 compute-1 sudo[191825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oktqthccqpqjskwlwdbdhzktiznyxtui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101459.4432888-2928-199170004284186/AnsiballZ_copy.py'
Dec 07 09:57:39 compute-1 sudo[191825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:39 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb490004030 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:39 compute-1 python3.9[191827]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:57:39 compute-1 sudo[191825]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:40 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:40 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:57:40 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:40.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:57:40 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:57:40 compute-1 ceph-mon[80077]: pgmap v455: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Dec 07 09:57:40 compute-1 sudo[191978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-risdgclxnxgllmgzpejbsujvlazvwowl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101460.1407852-2928-89198174109166/AnsiballZ_copy.py'
Dec 07 09:57:40 compute-1 sudo[191978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:40 compute-1 python3.9[191980]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:57:40 compute-1 sudo[191978]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:40 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:40 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb48c002140 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:41 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb47c003c30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:41 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:41 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:57:41 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:41.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:57:41 compute-1 sudo[192140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpvanldryptaqhsdqmohtopjatxknzgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101461.360532-3036-201841071780267/AnsiballZ_copy.py'
Dec 07 09:57:41 compute-1 sudo[192140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:41 compute-1 podman[192104]: 2025-12-07 09:57:41.752547398 +0000 UTC m=+0.086257669 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec 07 09:57:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:41 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4a80032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:41 compute-1 python3.9[192145]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:57:41 compute-1 sudo[192140]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:42 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:42 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:57:42 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:42.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:57:42 compute-1 sudo[192302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcmhnkkzwgyqhgcimfuvnkuwsdgnoqcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101462.0530546-3036-253443159996885/AnsiballZ_copy.py'
Dec 07 09:57:42 compute-1 sudo[192302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:42 compute-1 ceph-mon[80077]: pgmap v456: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 853 B/s wr, 2 op/s
Dec 07 09:57:42 compute-1 python3.9[192304]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:57:42 compute-1 sudo[192302]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:42 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:42 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb490004030 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:42 compute-1 sudo[192454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yoaqdkaohgjttypzguouduhibkugwgde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101462.6739042-3036-261212281695310/AnsiballZ_copy.py'
Dec 07 09:57:42 compute-1 sudo[192454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:43 compute-1 python3.9[192456]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:57:43 compute-1 sudo[192454]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:43 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb490004030 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:43 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:43 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:57:43 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:43.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:57:43 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:57:43 compute-1 sudo[192606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vszguosynngrkglrxvbdqktnxhhyrzca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101463.297841-3036-275159109119419/AnsiballZ_copy.py'
Dec 07 09:57:43 compute-1 sudo[192606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:43 compute-1 python3.9[192608]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:57:43 compute-1 sudo[192606]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:43 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb47c003c50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:44 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:44 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:57:44 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:44.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:57:44 compute-1 sudo[192759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfaatrcrzjkiddidfiouunqrzztaxtmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101463.9201245-3036-127674530449011/AnsiballZ_copy.py'
Dec 07 09:57:44 compute-1 sudo[192759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:44 compute-1 ceph-mon[80077]: pgmap v457: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Dec 07 09:57:44 compute-1 python3.9[192761]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:57:44 compute-1 sudo[192759]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:44 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:44 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4a80032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:45 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:57:45 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:45 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb490004030 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:45 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:45 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:57:45 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:45.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:57:45 compute-1 sudo[192911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvrhtlxjbyhfatqsvebruzptcvyhejun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101465.1532412-3144-198173921815933/AnsiballZ_systemd.py'
Dec 07 09:57:45 compute-1 sudo[192911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:45 compute-1 python3.9[192913]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 07 09:57:45 compute-1 systemd[1]: Reloading.
Dec 07 09:57:45 compute-1 systemd-rc-local-generator[192942]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:57:45 compute-1 systemd-sysv-generator[192947]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:57:45 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:45 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb48c0030a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:45 compute-1 ceph-mon[80077]: pgmap v458: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec 07 09:57:46 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:46 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:57:46 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:46.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:57:46 compute-1 systemd[1]: Starting libvirt logging daemon socket...
Dec 07 09:57:46 compute-1 systemd[1]: Listening on libvirt logging daemon socket.
Dec 07 09:57:46 compute-1 systemd[1]: Starting libvirt logging daemon admin socket...
Dec 07 09:57:46 compute-1 systemd[1]: Listening on libvirt logging daemon admin socket.
Dec 07 09:57:46 compute-1 systemd[1]: Starting libvirt logging daemon...
Dec 07 09:57:46 compute-1 systemd[1]: Started libvirt logging daemon.
Dec 07 09:57:46 compute-1 sudo[192911]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:46 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb48c0030a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:46 compute-1 sudo[193105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgzfzzmlekrhcevhexlwaeuuttxaises ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101466.4178548-3144-176696701368038/AnsiballZ_systemd.py'
Dec 07 09:57:46 compute-1 sudo[193105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:46 compute-1 python3.9[193107]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 07 09:57:46 compute-1 systemd[1]: Reloading.
Dec 07 09:57:47 compute-1 systemd-rc-local-generator[193127]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:57:47 compute-1 systemd-sysv-generator[193132]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:57:47 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:47 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4a80032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:47 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:47 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:57:47 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:47.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:57:47 compute-1 systemd[1]: Starting libvirt nodedev daemon socket...
Dec 07 09:57:47 compute-1 systemd[1]: Listening on libvirt nodedev daemon socket.
Dec 07 09:57:47 compute-1 systemd[1]: Starting libvirt nodedev daemon admin socket...
Dec 07 09:57:47 compute-1 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Dec 07 09:57:47 compute-1 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Dec 07 09:57:47 compute-1 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Dec 07 09:57:47 compute-1 systemd[1]: Starting libvirt nodedev daemon...
Dec 07 09:57:47 compute-1 systemd[1]: Started libvirt nodedev daemon.
Dec 07 09:57:47 compute-1 sudo[193105]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:47 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:47 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4a80032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:47 compute-1 ceph-mon[80077]: pgmap v459: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 07 09:57:48 compute-1 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Dec 07 09:57:48 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:48 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:57:48 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:48.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:57:48 compute-1 sudo[193322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivhyygpkuwqhrduqneyqrmxykcuxyivk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101467.540627-3144-35998469594987/AnsiballZ_systemd.py'
Dec 07 09:57:48 compute-1 sudo[193322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:48 compute-1 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Dec 07 09:57:48 compute-1 python3.9[193325]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 07 09:57:48 compute-1 systemd[1]: Reloading.
Dec 07 09:57:48 compute-1 systemd-rc-local-generator[193352]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:57:48 compute-1 systemd-sysv-generator[193356]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:57:48 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:48 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb48c0030a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:48 compute-1 systemd[1]: Starting libvirt proxy daemon admin socket...
Dec 07 09:57:48 compute-1 systemd[1]: Starting libvirt proxy daemon read-only socket...
Dec 07 09:57:48 compute-1 systemd[1]: Listening on libvirt proxy daemon admin socket.
Dec 07 09:57:48 compute-1 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Dec 07 09:57:48 compute-1 systemd[1]: Starting libvirt proxy daemon...
Dec 07 09:57:48 compute-1 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Dec 07 09:57:48 compute-1 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Dec 07 09:57:48 compute-1 systemd[1]: Started libvirt proxy daemon.
Dec 07 09:57:48 compute-1 sudo[193322]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:49 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:49 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb48c0030a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:49 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:49 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:57:49 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:49.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:57:49 compute-1 sudo[193543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqspapvbpqfrashrbstdmrfkhyntyoqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101469.075308-3144-166887146945098/AnsiballZ_systemd.py'
Dec 07 09:57:49 compute-1 sudo[193543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:49 compute-1 python3.9[193545]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 07 09:57:49 compute-1 systemd[1]: Reloading.
Dec 07 09:57:49 compute-1 systemd-rc-local-generator[193572]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:57:49 compute-1 systemd-sysv-generator[193576]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:57:49 compute-1 setroubleshoot[193295]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 19ddf4a8-a69b-4b1d-ab15-423ce0d78ca1
Dec 07 09:57:49 compute-1 setroubleshoot[193295]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Dec 07 09:57:49 compute-1 setroubleshoot[193295]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 19ddf4a8-a69b-4b1d-ab15-423ce0d78ca1
Dec 07 09:57:49 compute-1 setroubleshoot[193295]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Dec 07 09:57:49 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:49 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4a80032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:50 compute-1 systemd[1]: Listening on libvirt locking daemon socket.
Dec 07 09:57:50 compute-1 systemd[1]: Starting libvirt QEMU daemon socket...
Dec 07 09:57:50 compute-1 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 07 09:57:50 compute-1 systemd[1]: Starting Virtual Machine and Container Registration Service...
Dec 07 09:57:50 compute-1 systemd[1]: Listening on libvirt QEMU daemon socket.
Dec 07 09:57:50 compute-1 systemd[1]: Starting libvirt QEMU daemon admin socket...
Dec 07 09:57:50 compute-1 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Dec 07 09:57:50 compute-1 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Dec 07 09:57:50 compute-1 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Dec 07 09:57:50 compute-1 systemd[1]: Started Virtual Machine and Container Registration Service.
Dec 07 09:57:50 compute-1 ceph-mon[80077]: pgmap v460: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 07 09:57:50 compute-1 systemd[1]: Starting libvirt QEMU daemon...
Dec 07 09:57:50 compute-1 systemd[1]: Started libvirt QEMU daemon.
Dec 07 09:57:50 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:50 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:57:50 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:50.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:57:50 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:57:50 compute-1 sudo[193543]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:50 compute-1 sudo[193760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lduhvjswjhkamenconyeaizxryhlioow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101470.2773695-3144-149392797306491/AnsiballZ_systemd.py'
Dec 07 09:57:50 compute-1 sudo[193760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:50 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:50 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4a80032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:50 compute-1 python3.9[193762]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 07 09:57:50 compute-1 systemd[1]: Reloading.
Dec 07 09:57:50 compute-1 systemd-rc-local-generator[193790]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:57:50 compute-1 systemd-sysv-generator[193793]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:57:51 compute-1 systemd[1]: Starting libvirt secret daemon socket...
Dec 07 09:57:51 compute-1 systemd[1]: Listening on libvirt secret daemon socket.
Dec 07 09:57:51 compute-1 systemd[1]: Starting libvirt secret daemon admin socket...
Dec 07 09:57:51 compute-1 systemd[1]: Starting libvirt secret daemon read-only socket...
Dec 07 09:57:51 compute-1 systemd[1]: Listening on libvirt secret daemon read-only socket.
Dec 07 09:57:51 compute-1 systemd[1]: Listening on libvirt secret daemon admin socket.
Dec 07 09:57:51 compute-1 systemd[1]: Starting libvirt secret daemon...
Dec 07 09:57:51 compute-1 systemd[1]: Started libvirt secret daemon.
Dec 07 09:57:51 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:51 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4a80032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:51 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:51 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:57:51 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:51.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:57:51 compute-1 sudo[193760]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:51 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:51 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb47c003cf0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:52 compute-1 ceph-mon[80077]: pgmap v461: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 07 09:57:52 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:52 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:57:52 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:52.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:57:52 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:52 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb490004050 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:52 compute-1 sudo[193973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epnbwkiabafeeniwjhcvxmeyvvemqomv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101472.527952-3256-61182020892491/AnsiballZ_file.py'
Dec 07 09:57:52 compute-1 sudo[193973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:52 compute-1 python3.9[193975]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:57:52 compute-1 sudo[193973]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:53 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:53 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb48c0030a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:53 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:53 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:57:53 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:53.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:57:53 compute-1 sudo[194125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndpitjztrppfdeostvyppsznwtpvbodt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101473.296758-3279-277576096235294/AnsiballZ_find.py'
Dec 07 09:57:53 compute-1 sudo[194125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:53 compute-1 python3.9[194127]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 07 09:57:53 compute-1 sudo[194125]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:53 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:53 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4a80032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:54 compute-1 ceph-mon[80077]: pgmap v462: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:57:54 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:54 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:57:54 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:54.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:57:54 compute-1 sudo[194278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phozlvctgvabqwqinihxpuapqylrqvvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101474.130509-3303-219822163041285/AnsiballZ_command.py'
Dec 07 09:57:54 compute-1 sudo[194278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:54 compute-1 python3.9[194280]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:57:54 compute-1 sudo[194278]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:54 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:54 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb47c003d10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:55 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:57:55 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:55 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb490004070 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:55 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:55 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:57:55 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:55.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:57:55 compute-1 python3.9[194434]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 07 09:57:55 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:55 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb48c0030a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:56 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:56 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:57:56 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:56.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:57:56 compute-1 ceph-mon[80077]: pgmap v463: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 07 09:57:56 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:56 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4a80032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:56 compute-1 python3.9[194585]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:57:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:57 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb47c003eb0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:57 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:57 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:57:57 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:57.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:57:57 compute-1 python3.9[194706]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765101476.2248542-3360-48227416742606/.source.xml follow=False _original_basename=secret.xml.j2 checksum=ec35f87f58a946e19c403a490b743bca3d89a26e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:57:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:57 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb490004090 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:57 compute-1 sudo[194856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwqkyuttihokslndqizljguuxlbhtwqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101477.6071637-3405-45462884991826/AnsiballZ_command.py'
Dec 07 09:57:57 compute-1 sudo[194856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:58 compute-1 python3.9[194858]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 75f4c9fd-539a-5e17-b55a-0a12a4e2736c
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:57:58 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:58 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:57:58 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:57:58.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:57:58 compute-1 polkitd[43436]: Registered Authentication Agent for unix-process:194861:343874 (system bus name :1.1859 [pkttyagent --process 194861 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Dec 07 09:57:58 compute-1 polkitd[43436]: Unregistered Authentication Agent for unix-process:194861:343874 (system bus name :1.1859, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Dec 07 09:57:58 compute-1 polkitd[43436]: Registered Authentication Agent for unix-process:194860:343874 (system bus name :1.1860 [pkttyagent --process 194860 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Dec 07 09:57:58 compute-1 polkitd[43436]: Unregistered Authentication Agent for unix-process:194860:343874 (system bus name :1.1860, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Dec 07 09:57:58 compute-1 ceph-mon[80077]: pgmap v464: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:57:58 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:57:58 compute-1 sudo[194856]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:58 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:58 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb48c0030a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:59 compute-1 python3.9[195021]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:57:59 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:59 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4a80032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:59 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:57:59 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:57:59 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:57:59.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:57:59 compute-1 sudo[195046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 09:57:59 compute-1 sudo[195046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:57:59 compute-1 sudo[195046]: pam_unix(sudo:session): session closed for user root
Dec 07 09:57:59 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:57:59 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb47c003ed0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:57:59 compute-1 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Dec 07 09:57:59 compute-1 sudo[195196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thrkscqfymyowhrrejtwkjwwyqixthwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101479.4600656-3453-20989137118449/AnsiballZ_command.py'
Dec 07 09:57:59 compute-1 sudo[195196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:57:59 compute-1 systemd[1]: setroubleshootd.service: Deactivated successfully.
Dec 07 09:58:00 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:00 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:58:00 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:00.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:58:00 compute-1 sudo[195196]: pam_unix(sudo:session): session closed for user root
Dec 07 09:58:00 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:58:00 compute-1 ceph-mon[80077]: pgmap v465: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:58:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:00 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb47c003ed0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:00 compute-1 sudo[195350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zikukjcjwbkwrydywzcdidzpnroywntw ; FSID=75f4c9fd-539a-5e17-b55a-0a12a4e2736c KEY=AQASSzVpAAAAABAAHQ1Di7YjsYFnT8csFjJ07A== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101480.4951055-3477-146501845887371/AnsiballZ_command.py'
Dec 07 09:58:00 compute-1 sudo[195350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:58:00 compute-1 polkitd[43436]: Registered Authentication Agent for unix-process:195353:344157 (system bus name :1.1864 [pkttyagent --process 195353 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Dec 07 09:58:00 compute-1 polkitd[43436]: Unregistered Authentication Agent for unix-process:195353:344157 (system bus name :1.1864, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Dec 07 09:58:00 compute-1 sudo[195350]: pam_unix(sudo:session): session closed for user root
Dec 07 09:58:01 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:01 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb47c003ed0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:01 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:01 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:58:01 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:01.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:58:01 compute-1 sudo[195518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfruwlujkoiamualgmgdbzhswtazhkwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101481.3835607-3501-65556776460270/AnsiballZ_copy.py'
Dec 07 09:58:01 compute-1 sudo[195518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:58:01 compute-1 auditd[703]: Audit daemon rotating log files
Dec 07 09:58:01 compute-1 podman[195482]: 2025-12-07 09:58:01.763935514 +0000 UTC m=+0.111326981 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 07 09:58:01 compute-1 python3.9[195522]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:58:01 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:01 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4a80032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:01 compute-1 sudo[195518]: pam_unix(sudo:session): session closed for user root
Dec 07 09:58:01 compute-1 ceph-mon[80077]: pgmap v466: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 07 09:58:02 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:02 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:58:02 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:02.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:58:02 compute-1 sudo[195688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxhuocxtxuqwulqzlhktthgnplulkplp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101482.2429366-3525-258936944878973/AnsiballZ_stat.py'
Dec 07 09:58:02 compute-1 sudo[195688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:58:02 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:02 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb48c0030a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:02 compute-1 python3.9[195690]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:58:02 compute-1 sudo[195688]: pam_unix(sudo:session): session closed for user root
Dec 07 09:58:03 compute-1 sudo[195812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awijzatripqdyssdxmlfoaqwjlhuukjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101482.2429366-3525-258936944878973/AnsiballZ_copy.py'
Dec 07 09:58:03 compute-1 sudo[195812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:58:03 compute-1 python3.9[195814]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1765101482.2429366-3525-258936944878973/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:58:03 compute-1 sudo[195812]: pam_unix(sudo:session): session closed for user root
Dec 07 09:58:03 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:03 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb48c0030a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:03 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:03 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:58:03 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:03.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:58:03 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:03 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4980014d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:04 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:04 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:58:04 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:04.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:58:04 compute-1 sudo[195965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgxurpfxhcskqlqrjfapuijyxrwwdgzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101483.868365-3573-86282261029569/AnsiballZ_file.py'
Dec 07 09:58:04 compute-1 sudo[195965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:58:04 compute-1 python3.9[195967]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:58:04 compute-1 sudo[195965]: pam_unix(sudo:session): session closed for user root
Dec 07 09:58:04 compute-1 ceph-mon[80077]: pgmap v467: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:58:04 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:04 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4a80032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:05 compute-1 sudo[196117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjogumvmnsdvittoarwcjdbmtulcaesw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101484.7945976-3597-170546832232533/AnsiballZ_stat.py'
Dec 07 09:58:05 compute-1 sudo[196117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:58:05 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:58:05 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:05 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb47c003ef0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:05 compute-1 python3.9[196119]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:58:05 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:05 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:58:05 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:05.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:58:05 compute-1 sudo[196117]: pam_unix(sudo:session): session closed for user root
Dec 07 09:58:05 compute-1 ceph-mon[80077]: pgmap v468: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 07 09:58:05 compute-1 sudo[196197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwuqnsaeyleficmzunowbqmfwdxjxtpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101484.7945976-3597-170546832232533/AnsiballZ_file.py'
Dec 07 09:58:05 compute-1 sudo[196197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:58:05 compute-1 sshd-session[196120]: Connection closed by authenticating user root 104.248.193.130 port 51874 [preauth]
Dec 07 09:58:05 compute-1 python3.9[196199]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:58:05 compute-1 sudo[196197]: pam_unix(sudo:session): session closed for user root
Dec 07 09:58:05 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:05 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb47c003ef0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:06 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:06 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:58:06 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:06.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:58:06 compute-1 sudo[196350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnftsejnytqmlszgiiipstqmydvwhxsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101486.1854224-3633-185103680863653/AnsiballZ_stat.py'
Dec 07 09:58:06 compute-1 sudo[196350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:58:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:06 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4980014d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:06 compute-1 python3.9[196352]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:58:06 compute-1 sudo[196350]: pam_unix(sudo:session): session closed for user root
Dec 07 09:58:06 compute-1 sudo[196428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acjhhbgsluzvfetdjredjeqwurbayfuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101486.1854224-3633-185103680863653/AnsiballZ_file.py'
Dec 07 09:58:06 compute-1 sudo[196428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:58:07 compute-1 python3.9[196430]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.k_chlvcv recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:58:07 compute-1 sudo[196428]: pam_unix(sudo:session): session closed for user root
Dec 07 09:58:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:07 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4a80032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:07 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:07 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:58:07 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:07.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:58:07 compute-1 sudo[196580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqrtedjeezmnlfsykwlsyoaamhlvlzdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101487.5707874-3669-226718137204627/AnsiballZ_stat.py'
Dec 07 09:58:07 compute-1 sudo[196580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:58:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:07 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb47c003ef0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:07 compute-1 ceph-mon[80077]: pgmap v469: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:58:08 compute-1 python3.9[196582]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:58:08 compute-1 sudo[196580]: pam_unix(sudo:session): session closed for user root
Dec 07 09:58:08 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:08 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:58:08 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:08.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:58:08 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/095808 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 07 09:58:08 compute-1 sudo[196661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awrcacntuebjmiyplfkxnmqikezwuvoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101487.5707874-3669-226718137204627/AnsiballZ_file.py'
Dec 07 09:58:08 compute-1 sudo[196661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:58:08 compute-1 python3.9[196663]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:58:08 compute-1 sudo[196661]: pam_unix(sudo:session): session closed for user root
Dec 07 09:58:08 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:08 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb47c003ef0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:09 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:09 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb47c003ef0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:09 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:09 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:58:09 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:09.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:58:09 compute-1 sudo[196813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhrxenurzveedsnaweiegrmtxbatcfnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101488.9786959-3708-119309835753275/AnsiballZ_command.py'
Dec 07 09:58:09 compute-1 sudo[196813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:58:09 compute-1 ceph-mon[80077]: pgmap v470: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:58:09 compute-1 python3.9[196815]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:58:09 compute-1 sudo[196813]: pam_unix(sudo:session): session closed for user root
Dec 07 09:58:09 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:09 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4bc002010 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:10 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:58:10 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:10 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:58:10 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:10.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:58:10 compute-1 sudo[196967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mslgwstyiucudgnytcjbtlhvswasoawe ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765101489.9743326-3732-174306547884726/AnsiballZ_edpm_nftables_from_files.py'
Dec 07 09:58:10 compute-1 sudo[196967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:58:10 compute-1 python3[196969]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 07 09:58:10 compute-1 sudo[196967]: pam_unix(sudo:session): session closed for user root
Dec 07 09:58:10 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:10 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4a80032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:11 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:11 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b0001220 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:11 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:11 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:58:11 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:11.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:58:11 compute-1 sudo[197119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdjtajqoosgjrehciantlhlkitmtoivz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101490.9888983-3756-56672416119209/AnsiballZ_stat.py'
Dec 07 09:58:11 compute-1 sudo[197119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:58:11 compute-1 python3.9[197121]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:58:11 compute-1 sudo[197119]: pam_unix(sudo:session): session closed for user root
Dec 07 09:58:11 compute-1 sudo[197197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsgvpwshrjzzxeolubkkmmisnawzuhyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101490.9888983-3756-56672416119209/AnsiballZ_file.py'
Dec 07 09:58:11 compute-1 sudo[197197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:58:11 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:11 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb47c003ef0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:11 compute-1 podman[197199]: 2025-12-07 09:58:11.915538202 +0000 UTC m=+0.080958334 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 07 09:58:12 compute-1 python3.9[197200]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:58:12 compute-1 sudo[197197]: pam_unix(sudo:session): session closed for user root
Dec 07 09:58:12 compute-1 ceph-mon[80077]: pgmap v471: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:58:12 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:12 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:58:12 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:12.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:58:12 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:12 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4bc002010 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:12 compute-1 sudo[197370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urnhqkisliaztemzizohtvkdyykyvaji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101492.3937342-3792-196793375013917/AnsiballZ_stat.py'
Dec 07 09:58:12 compute-1 sudo[197370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:58:12 compute-1 python3.9[197372]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:58:13 compute-1 sudo[197370]: pam_unix(sudo:session): session closed for user root
Dec 07 09:58:13 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:58:13 compute-1 sudo[197448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwopbgknqlgrqoaxakrftnzmentkhlig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101492.3937342-3792-196793375013917/AnsiballZ_file.py'
Dec 07 09:58:13 compute-1 sudo[197448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:58:13 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:13 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4bc002010 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:13 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:13 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:58:13 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:13.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:58:13 compute-1 python3.9[197450]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:58:13 compute-1 sudo[197448]: pam_unix(sudo:session): session closed for user root
Dec 07 09:58:13 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:13 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b0002050 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:14 compute-1 sudo[197600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjjjciceolegwicdrnaoawlivphqwcuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101493.723754-3828-145955974252774/AnsiballZ_stat.py'
Dec 07 09:58:14 compute-1 sudo[197600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:58:14 compute-1 ceph-mon[80077]: pgmap v472: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 07 09:58:14 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:14 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:58:14 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:14.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:58:14 compute-1 python3.9[197602]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:58:14 compute-1 sudo[197600]: pam_unix(sudo:session): session closed for user root
Dec 07 09:58:14 compute-1 sudo[197679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsibwdlgjrudxevyudcrmcjeytjrsbfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101493.723754-3828-145955974252774/AnsiballZ_file.py'
Dec 07 09:58:14 compute-1 sudo[197679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:58:14 compute-1 python3.9[197681]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:58:14 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:14 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb47c003f10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:14 compute-1 sudo[197679]: pam_unix(sudo:session): session closed for user root
Dec 07 09:58:15 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:58:15 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:15 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4a80032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:15 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:15 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:58:15 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:15.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:58:15 compute-1 sudo[197831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhfiggbccbfrubhkjjpccrsgxnzeblta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101495.2119954-3864-220444708334600/AnsiballZ_stat.py'
Dec 07 09:58:15 compute-1 sudo[197831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:58:15 compute-1 python3.9[197833]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:58:15 compute-1 sudo[197831]: pam_unix(sudo:session): session closed for user root
Dec 07 09:58:15 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:15 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4bc0089d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:15 compute-1 sudo[197909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzkcpuhiadebgvdkkqsbmjrzlmychpvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101495.2119954-3864-220444708334600/AnsiballZ_file.py'
Dec 07 09:58:16 compute-1 sudo[197909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:58:16 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:16 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:58:16 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:16.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:58:16 compute-1 python3.9[197911]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:58:16 compute-1 sudo[197909]: pam_unix(sudo:session): session closed for user root
Dec 07 09:58:16 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:16 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b0002050 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:17 compute-1 ceph-mon[80077]: pgmap v473: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 07 09:58:17 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:17 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb47c003f30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:17 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:17 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:58:17 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:17.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:58:17 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:17 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4a80032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:18 compute-1 sudo[198062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcpenorcdngsuuvihkjrvbsefdrkgqif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101497.6014977-3900-163688365340624/AnsiballZ_stat.py'
Dec 07 09:58:18 compute-1 sudo[198062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:58:18 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:18 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 07 09:58:18 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:18 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:58:18 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:18.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:58:18 compute-1 ceph-mon[80077]: pgmap v474: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 07 09:58:18 compute-1 python3.9[198064]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:58:18 compute-1 sudo[198062]: pam_unix(sudo:session): session closed for user root
Dec 07 09:58:18 compute-1 sudo[198188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbdgkzfcmdioqevnjrnmjhhpsyeiyanw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101497.6014977-3900-163688365340624/AnsiballZ_copy.py'
Dec 07 09:58:18 compute-1 sudo[198188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:58:18 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:18 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4a80032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:18 compute-1 python3.9[198190]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765101497.6014977-3900-163688365340624/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:58:18 compute-1 sudo[198188]: pam_unix(sudo:session): session closed for user root
Dec 07 09:58:19 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:19 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b0002050 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:19 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:19 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:58:19 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:19.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:58:19 compute-1 sudo[198340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-toewkfzkyzwohwxcrozvijstcvewulys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101499.0838933-3945-230152849353223/AnsiballZ_file.py'
Dec 07 09:58:19 compute-1 sudo[198340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:58:19 compute-1 ceph-mon[80077]: pgmap v475: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 07 09:58:19 compute-1 sudo[198343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 09:58:19 compute-1 python3.9[198342]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:58:19 compute-1 sudo[198343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:58:19 compute-1 sudo[198343]: pam_unix(sudo:session): session closed for user root
Dec 07 09:58:19 compute-1 sudo[198340]: pam_unix(sudo:session): session closed for user root
Dec 07 09:58:19 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:19 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb47c003f50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:20 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:58:20 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:20 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:58:20 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:20.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:58:20 compute-1 sudo[198518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htvdlbjqxqzdraacorpcwqivoeexrolu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101499.8896468-3969-267216136354473/AnsiballZ_command.py'
Dec 07 09:58:20 compute-1 sudo[198518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:58:20 compute-1 python3.9[198520]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:58:20 compute-1 sudo[198518]: pam_unix(sudo:session): session closed for user root
Dec 07 09:58:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:20 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4bc0092f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:21 compute-1 ceph-mon[80077]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #28. Immutable memtables: 0.
Dec 07 09:58:21 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:58:21.045245) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 07 09:58:21 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 28
Dec 07 09:58:21 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101501045300, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 4649, "num_deletes": 502, "total_data_size": 12714210, "memory_usage": 12902888, "flush_reason": "Manual Compaction"}
Dec 07 09:58:21 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #29: started
Dec 07 09:58:21 compute-1 sudo[198673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgrfzakexbmnansbfotmnusvxrcbejun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101500.6797788-3993-69163129270445/AnsiballZ_blockinfile.py'
Dec 07 09:58:21 compute-1 sudo[198673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:58:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:21 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 07 09:58:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:21 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 07 09:58:21 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101501097310, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 29, "file_size": 4743608, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13487, "largest_seqno": 18131, "table_properties": {"data_size": 4731446, "index_size": 6992, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 4101, "raw_key_size": 32374, "raw_average_key_size": 19, "raw_value_size": 4702977, "raw_average_value_size": 2897, "num_data_blocks": 305, "num_entries": 1623, "num_filter_entries": 1623, "num_deletions": 502, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765101044, "oldest_key_time": 1765101044, "file_creation_time": 1765101501, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "19b7cd17-892b-4642-8771-311739802c4a", "db_session_id": "JE48258Z8V09XG5T5TD7", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Dec 07 09:58:21 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 52107 microseconds, and 12558 cpu microseconds.
Dec 07 09:58:21 compute-1 ceph-mon[80077]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 07 09:58:21 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:58:21.097357) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #29: 4743608 bytes OK
Dec 07 09:58:21 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:58:21.097377) [db/memtable_list.cc:519] [default] Level-0 commit table #29 started
Dec 07 09:58:21 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:58:21.098944) [db/memtable_list.cc:722] [default] Level-0 commit table #29: memtable #1 done
Dec 07 09:58:21 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:58:21.098962) EVENT_LOG_v1 {"time_micros": 1765101501098957, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 07 09:58:21 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:58:21.098979) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 07 09:58:21 compute-1 ceph-mon[80077]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 12694125, prev total WAL file size 12694125, number of live WAL files 2.
Dec 07 09:58:21 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000025.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 09:58:21 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:58:21.101920) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353032' seq:0, type:0; will stop at (end)
Dec 07 09:58:21 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 07 09:58:21 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [29(4632KB)], [27(12MB)]
Dec 07 09:58:21 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101501102000, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [29], "files_L6": [27], "score": -1, "input_data_size": 18214834, "oldest_snapshot_seqno": -1}
Dec 07 09:58:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:21 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 07 09:58:21 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #30: 5171 keys, 14111723 bytes, temperature: kUnknown
Dec 07 09:58:21 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101501250033, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 30, "file_size": 14111723, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14075000, "index_size": 22725, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12933, "raw_key_size": 129183, "raw_average_key_size": 24, "raw_value_size": 13979185, "raw_average_value_size": 2703, "num_data_blocks": 952, "num_entries": 5171, "num_filter_entries": 5171, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100473, "oldest_key_time": 0, "file_creation_time": 1765101501, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "19b7cd17-892b-4642-8771-311739802c4a", "db_session_id": "JE48258Z8V09XG5T5TD7", "orig_file_number": 30, "seqno_to_time_mapping": "N/A"}}
Dec 07 09:58:21 compute-1 ceph-mon[80077]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 07 09:58:21 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:58:21.250402) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 14111723 bytes
Dec 07 09:58:21 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:58:21.251956) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 123.0 rd, 95.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.5, 12.8 +0.0 blob) out(13.5 +0.0 blob), read-write-amplify(6.8) write-amplify(3.0) OK, records in: 5977, records dropped: 806 output_compression: NoCompression
Dec 07 09:58:21 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:58:21.252005) EVENT_LOG_v1 {"time_micros": 1765101501251987, "job": 14, "event": "compaction_finished", "compaction_time_micros": 148130, "compaction_time_cpu_micros": 29536, "output_level": 6, "num_output_files": 1, "total_output_size": 14111723, "num_input_records": 5977, "num_output_records": 5171, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 07 09:58:21 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 09:58:21 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101501254170, "job": 14, "event": "table_file_deletion", "file_number": 29}
Dec 07 09:58:21 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000027.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 09:58:21 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101501258996, "job": 14, "event": "table_file_deletion", "file_number": 27}
Dec 07 09:58:21 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:58:21.101834) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 09:58:21 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:58:21.259117) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 09:58:21 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:58:21.259127) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 09:58:21 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:58:21.259130) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 09:58:21 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:58:21.259132) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 09:58:21 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:58:21.259135) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 09:58:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:21 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4a8004000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:21 compute-1 python3.9[198675]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:58:21 compute-1 sudo[198673]: pam_unix(sudo:session): session closed for user root
Dec 07 09:58:21 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:21 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:58:21 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:21.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:58:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:21 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b0003150 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:22 compute-1 sudo[198825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hingnnctjyhbfcuovsyfoneoypkethbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101501.7453988-4020-140590709591383/AnsiballZ_command.py'
Dec 07 09:58:22 compute-1 sudo[198825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:58:22 compute-1 ceph-mon[80077]: pgmap v476: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec 07 09:58:22 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:22 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:58:22 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:22.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:58:22 compute-1 python3.9[198827]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:58:22 compute-1 sudo[198825]: pam_unix(sudo:session): session closed for user root
Dec 07 09:58:22 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:22 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb47c003f70 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:22 compute-1 sudo[198979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rewinsipqqzexrplxzsbftwjydeiyhzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101502.5530417-4044-49037817169042/AnsiballZ_stat.py'
Dec 07 09:58:22 compute-1 sudo[198979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:58:23 compute-1 python3.9[198981]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 07 09:58:23 compute-1 sudo[198979]: pam_unix(sudo:session): session closed for user root
Dec 07 09:58:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:23 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4bc0092f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:23 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:23 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:58:23 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:23.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:58:23 compute-1 sudo[199133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbvkthcvrwqxgjzpjyxpxruvhteomyrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101503.466091-4068-130625278330551/AnsiballZ_command.py'
Dec 07 09:58:23 compute-1 sudo[199133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:58:23 compute-1 python3.9[199135]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:58:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:23 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4a8004000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:23 compute-1 sudo[199133]: pam_unix(sudo:session): session closed for user root
Dec 07 09:58:24 compute-1 ceph-mon[80077]: pgmap v477: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec 07 09:58:24 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:24 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:58:24 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:24.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:58:24 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:24 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 07 09:58:24 compute-1 sudo[199289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkavmxhymwatjefjneydgplckmrdwzps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101504.3012176-4092-139072952865399/AnsiballZ_file.py'
Dec 07 09:58:24 compute-1 sudo[199289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:58:24 compute-1 sudo[199290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 09:58:24 compute-1 sudo[199290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:58:24 compute-1 sudo[199290]: pam_unix(sudo:session): session closed for user root
Dec 07 09:58:24 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:24 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b0003150 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:24 compute-1 sudo[199317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 07 09:58:24 compute-1 sudo[199317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:58:24 compute-1 python3.9[199298]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:58:24 compute-1 sudo[199289]: pam_unix(sudo:session): session closed for user root
Dec 07 09:58:25 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:58:25 compute-1 sudo[199317]: pam_unix(sudo:session): session closed for user root
Dec 07 09:58:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:25 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb47c003f90 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:25 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:25 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:58:25 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:25.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:58:25 compute-1 sudo[199524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pamhhhhwrjahjmktykhipukxofjhwlii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101505.1970904-4116-109800488887757/AnsiballZ_stat.py'
Dec 07 09:58:25 compute-1 sudo[199524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:58:25 compute-1 ceph-mon[80077]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #31. Immutable memtables: 0.
Dec 07 09:58:25 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:58:25.561591) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 07 09:58:25 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 31
Dec 07 09:58:25 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101505561678, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 306, "num_deletes": 251, "total_data_size": 127716, "memory_usage": 134152, "flush_reason": "Manual Compaction"}
Dec 07 09:58:25 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #32: started
Dec 07 09:58:25 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101505565536, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 32, "file_size": 83637, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18136, "largest_seqno": 18437, "table_properties": {"data_size": 81708, "index_size": 157, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5144, "raw_average_key_size": 18, "raw_value_size": 77792, "raw_average_value_size": 280, "num_data_blocks": 7, "num_entries": 277, "num_filter_entries": 277, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765101502, "oldest_key_time": 1765101502, "file_creation_time": 1765101505, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "19b7cd17-892b-4642-8771-311739802c4a", "db_session_id": "JE48258Z8V09XG5T5TD7", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Dec 07 09:58:25 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 3998 microseconds, and 1512 cpu microseconds.
Dec 07 09:58:25 compute-1 ceph-mon[80077]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 07 09:58:25 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:58:25.565595) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #32: 83637 bytes OK
Dec 07 09:58:25 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:58:25.565649) [db/memtable_list.cc:519] [default] Level-0 commit table #32 started
Dec 07 09:58:25 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:58:25.567735) [db/memtable_list.cc:722] [default] Level-0 commit table #32: memtable #1 done
Dec 07 09:58:25 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:58:25.567759) EVENT_LOG_v1 {"time_micros": 1765101505567753, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 07 09:58:25 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:58:25.567779) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 07 09:58:25 compute-1 ceph-mon[80077]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 125503, prev total WAL file size 125503, number of live WAL files 2.
Dec 07 09:58:25 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000028.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 09:58:25 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:58:25.568266) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Dec 07 09:58:25 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 07 09:58:25 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [32(81KB)], [30(13MB)]
Dec 07 09:58:25 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101505568333, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [32], "files_L6": [30], "score": -1, "input_data_size": 14195360, "oldest_snapshot_seqno": -1}
Dec 07 09:58:25 compute-1 python3.9[199526]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:58:25 compute-1 sudo[199524]: pam_unix(sudo:session): session closed for user root
Dec 07 09:58:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:25 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4bc0092f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:25 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #33: 4935 keys, 11980006 bytes, temperature: kUnknown
Dec 07 09:58:25 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101505935302, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 33, "file_size": 11980006, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11946516, "index_size": 20068, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12357, "raw_key_size": 124947, "raw_average_key_size": 25, "raw_value_size": 11856366, "raw_average_value_size": 2402, "num_data_blocks": 835, "num_entries": 4935, "num_filter_entries": 4935, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100473, "oldest_key_time": 0, "file_creation_time": 1765101505, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "19b7cd17-892b-4642-8771-311739802c4a", "db_session_id": "JE48258Z8V09XG5T5TD7", "orig_file_number": 33, "seqno_to_time_mapping": "N/A"}}
Dec 07 09:58:25 compute-1 ceph-mon[80077]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 07 09:58:25 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:58:25.935696) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 11980006 bytes
Dec 07 09:58:25 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:58:25.938629) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 38.7 rd, 32.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 13.5 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(313.0) write-amplify(143.2) OK, records in: 5448, records dropped: 513 output_compression: NoCompression
Dec 07 09:58:25 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:58:25.938658) EVENT_LOG_v1 {"time_micros": 1765101505938643, "job": 16, "event": "compaction_finished", "compaction_time_micros": 367079, "compaction_time_cpu_micros": 40929, "output_level": 6, "num_output_files": 1, "total_output_size": 11980006, "num_input_records": 5448, "num_output_records": 4935, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 07 09:58:25 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 09:58:25 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101505938821, "job": 16, "event": "table_file_deletion", "file_number": 32}
Dec 07 09:58:25 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000030.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 09:58:25 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101505941180, "job": 16, "event": "table_file_deletion", "file_number": 30}
Dec 07 09:58:25 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:58:25.568127) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 09:58:25 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:58:25.941299) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 09:58:25 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:58:25.941306) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 09:58:25 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:58:25.941308) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 09:58:25 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:58:25.941310) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 09:58:25 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-09:58:25.941311) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 09:58:26 compute-1 sudo[199647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-przdrjxbuniykszsuycfffulcojixoka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101505.1970904-4116-109800488887757/AnsiballZ_copy.py'
Dec 07 09:58:26 compute-1 sudo[199647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:58:26 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:26 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:58:26 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:26.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:58:26 compute-1 python3.9[199650]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765101505.1970904-4116-109800488887757/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:58:26 compute-1 sudo[199647]: pam_unix(sudo:session): session closed for user root
Dec 07 09:58:26 compute-1 ceph-mon[80077]: pgmap v478: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 07 09:58:26 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:58:26 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 07 09:58:26 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:58:26 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:58:26 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 07 09:58:26 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 07 09:58:26 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:58:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:26 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4a8004000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:26 compute-1 sudo[199800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhfovbrpfplcezbduurpaabeeyyxjcuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101506.6667504-4161-84305134307397/AnsiballZ_stat.py'
Dec 07 09:58:26 compute-1 sudo[199800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:58:27 compute-1 python3.9[199802]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:58:27 compute-1 sudo[199800]: pam_unix(sudo:session): session closed for user root
Dec 07 09:58:27 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:27 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b0003150 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:27 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:27 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:58:27 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:27.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:58:27 compute-1 sudo[199923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujposnaohoyrfopklmielhafobhzcsib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101506.6667504-4161-84305134307397/AnsiballZ_copy.py'
Dec 07 09:58:27 compute-1 sudo[199923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:58:27 compute-1 ceph-mon[80077]: pgmap v479: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 07 09:58:27 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:58:27 compute-1 python3.9[199925]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765101506.6667504-4161-84305134307397/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:58:27 compute-1 sudo[199923]: pam_unix(sudo:session): session closed for user root
Dec 07 09:58:27 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:27 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b0003150 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:28 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:28 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:58:28 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:28.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:58:28 compute-1 sudo[200076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqqxhjqcvxmhfbxwjpbplohrjvfcpfkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101508.0185788-4206-16172185622944/AnsiballZ_stat.py'
Dec 07 09:58:28 compute-1 sudo[200076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:58:28 compute-1 python3.9[200078]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:58:28 compute-1 sudo[200076]: pam_unix(sudo:session): session closed for user root
Dec 07 09:58:28 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:28 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4bc00a3f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:28 compute-1 sudo[200199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-seholiathcyywbobgunvkscxevgnxgay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101508.0185788-4206-16172185622944/AnsiballZ_copy.py'
Dec 07 09:58:28 compute-1 sudo[200199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:58:29 compute-1 python3.9[200201]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765101508.0185788-4206-16172185622944/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:58:29 compute-1 sudo[200199]: pam_unix(sudo:session): session closed for user root
Dec 07 09:58:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:29 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4a8004000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:29 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:29 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:58:29 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:29.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:58:29 compute-1 sudo[200351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlmovhoewlexlrlwlawegdvpzyfgxaku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101509.4569316-4251-118159072326459/AnsiballZ_systemd.py'
Dec 07 09:58:29 compute-1 sudo[200351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:58:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:29 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b0003150 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:30 compute-1 ceph-mon[80077]: pgmap v480: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 07 09:58:30 compute-1 python3.9[200353]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 07 09:58:30 compute-1 systemd[1]: Reloading.
Dec 07 09:58:30 compute-1 systemd-rc-local-generator[200379]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:58:30 compute-1 systemd-sysv-generator[200384]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:58:30 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:58:30 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:30 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:58:30 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:30.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:58:30 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/095830 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 07 09:58:30 compute-1 systemd[1]: Reached target edpm_libvirt.target.
Dec 07 09:58:30 compute-1 sudo[200351]: pam_unix(sudo:session): session closed for user root
Dec 07 09:58:30 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:30 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb47c003f90 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:31 compute-1 sudo[200543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsxdqecxcydbvvsxaelxcotxvcytvuna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101510.9407573-4275-9610808060900/AnsiballZ_systemd.py'
Dec 07 09:58:31 compute-1 sudo[200543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:58:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:31 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb47c003f90 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:31 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:31 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:58:31 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:31.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:58:31 compute-1 python3.9[200545]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 07 09:58:31 compute-1 systemd[1]: Reloading.
Dec 07 09:58:31 compute-1 systemd-rc-local-generator[200572]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:58:31 compute-1 systemd-sysv-generator[200577]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:58:31 compute-1 systemd[1]: Reloading.
Dec 07 09:58:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:31 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4a8004000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:31 compute-1 podman[200584]: 2025-12-07 09:58:31.953897313 +0000 UTC m=+0.096985711 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 07 09:58:31 compute-1 systemd-rc-local-generator[200633]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:58:31 compute-1 systemd-sysv-generator[200637]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:58:32 compute-1 ceph-mon[80077]: pgmap v481: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 07 09:58:32 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:32 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:58:32 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:32.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:58:32 compute-1 sudo[200543]: pam_unix(sudo:session): session closed for user root
Dec 07 09:58:32 compute-1 sudo[200669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 07 09:58:32 compute-1 sudo[200669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:58:32 compute-1 sudo[200669]: pam_unix(sudo:session): session closed for user root
Dec 07 09:58:32 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:32 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b00045e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:32 compute-1 sshd-session[142834]: Connection closed by 192.168.122.30 port 43046
Dec 07 09:58:32 compute-1 sshd-session[142821]: pam_unix(sshd:session): session closed for user zuul
Dec 07 09:58:32 compute-1 systemd[1]: session-52.scope: Deactivated successfully.
Dec 07 09:58:32 compute-1 systemd[1]: session-52.scope: Consumed 3min 25.664s CPU time.
Dec 07 09:58:32 compute-1 systemd-logind[796]: Session 52 logged out. Waiting for processes to exit.
Dec 07 09:58:32 compute-1 systemd-logind[796]: Removed session 52.
Dec 07 09:58:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:33 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb47c003f90 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:33 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:58:33 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:58:33 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:33 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:58:33 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:33.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:58:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:33 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4bc00a3f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:34 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:34 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:58:34 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:34.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:58:34 compute-1 ceph-mon[80077]: pgmap v482: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Dec 07 09:58:34 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:34 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4a8004000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:35 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:58:35 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:35 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4a8004000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:35 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:35 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:58:35 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:35.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:58:35 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:35 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb47c003f90 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:36 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:36 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:58:36 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:36.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:58:36 compute-1 ceph-mon[80077]: pgmap v483: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Dec 07 09:58:36 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:36 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4bc00a3f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:37 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:37 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4bc00a3f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:37 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:37 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:58:37 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:37.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:58:37 compute-1 ceph-mon[80077]: pgmap v484: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 07 09:58:37 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:37 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b00045e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:38 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:38 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:58:38 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:38.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:58:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:58:38.630 142603 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 09:58:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:58:38.631 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 09:58:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:58:38.631 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 09:58:38 compute-1 sshd-session[200697]: Accepted publickey for zuul from 192.168.122.30 port 55354 ssh2: ECDSA SHA256:Ge4vEI3tJyOAEV3kv3WUGTzBV/uoL+dgWZHkPhbKL64
Dec 07 09:58:38 compute-1 systemd-logind[796]: New session 53 of user zuul.
Dec 07 09:58:38 compute-1 systemd[1]: Started Session 53 of User zuul.
Dec 07 09:58:38 compute-1 sshd-session[200697]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 07 09:58:38 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:38 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb47c003f90 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:39 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb47c003f90 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:39 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:39 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:58:39 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:39.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:58:39 compute-1 sudo[200854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 09:58:39 compute-1 sudo[200854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:58:39 compute-1 sudo[200854]: pam_unix(sudo:session): session closed for user root
Dec 07 09:58:39 compute-1 python3.9[200853]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 07 09:58:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:39 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4a8004000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:40 compute-1 ceph-mon[80077]: pgmap v485: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 07 09:58:40 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:58:40 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:40 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:58:40 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:40.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:58:40 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:40 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb48c001230 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:41 compute-1 python3.9[201033]: ansible-ansible.builtin.service_facts Invoked
Dec 07 09:58:41 compute-1 network[201050]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 07 09:58:41 compute-1 network[201051]: 'network-scripts' will be removed from distribution in near future.
Dec 07 09:58:41 compute-1 network[201052]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 07 09:58:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:41 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb490002070 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:41 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:41 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:58:41 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:41.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:58:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:41 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4980014d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:42 compute-1 podman[201059]: 2025-12-07 09:58:42.110701273 +0000 UTC m=+0.057334872 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Dec 07 09:58:42 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:42 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:58:42 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:42.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:58:42 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:42 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4a8004000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:58:42 compute-1 ceph-mon[80077]: pgmap v486: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec 07 09:58:43 compute-1 kernel: ganesha.nfsd[184551]: segfault at 50 ip 00007fb567a2932e sp 00007fb5357f9210 error 4 in libntirpc.so.5.8[7fb567a0e000+2c000] likely on CPU 7 (core 0, socket 7)
Dec 07 09:58:43 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
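Note: the kernel segfault line gives both the faulting instruction pointer and the load range it reports for libntirpc.so.5.8, so the offset into the library can be computed directly; the fault address 0x50 together with error code 4 (a user-mode read of a not-present page) is consistent with reading a field at a small offset from a NULL pointer. The systemd-coredump record further down reports the same address as libntirpc.so.5.8 + 0x2232e, likely because it anchors on a different segment of the mapping than the kernel does, so symbolizing against the actual binary is the safer route. A small arithmetic sketch, with all values copied from the kernel line above:

    # Quick arithmetic on the kernel segfault report; values copied from the log.
    ip   = 0x7fb567a2932e            # faulting instruction pointer
    base = 0x7fb567a0e000            # start of the libntirpc.so.5.8 mapping per the kernel
    size = 0x2c000                   # length of that mapping
    off  = ip - base
    assert base <= ip < base + size  # sanity check: ip falls inside the reported mapping
    print(f"fault at libntirpc.so.5.8+{off:#x}")   # libntirpc.so.5.8+0x1b32e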
Dec 07 09:58:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[168417]: 07/12/2025 09:58:43 : epoch 69354f36 : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4a8004000 fd 42 proxy ignored for local
Dec 07 09:58:43 compute-1 systemd[1]: Started Process Core Dump (PID 201160/UID 0).
Dec 07 09:58:43 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:43 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:58:43 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:43.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:58:43 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:58:43 compute-1 ceph-mon[80077]: pgmap v487: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:58:44 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:44 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:58:44 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:44.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:58:44 compute-1 systemd-coredump[201162]: Process 168425 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 58:
                                                    #0  0x00007fb567a2932e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Dec 07 09:58:44 compute-1 systemd[1]: systemd-coredump@6-201160-0.service: Deactivated successfully.
Dec 07 09:58:44 compute-1 podman[201224]: 2025-12-07 09:58:44.413307408 +0000 UTC m=+0.021982528 container died 1b7ee8f05cd702b7f96b6b31042f9cd301f9dd5351308df376041203a43e77c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 07 09:58:44 compute-1 systemd[1]: var-lib-containers-storage-overlay-19290d3d7d306b21dea28fa5fbb286c4244a288a1453bf9c182d1038bf9b9fb5-merged.mount: Deactivated successfully.
Dec 07 09:58:44 compute-1 podman[201224]: 2025-12-07 09:58:44.453953985 +0000 UTC m=+0.062629085 container remove 1b7ee8f05cd702b7f96b6b31042f9cd301f9dd5351308df376041203a43e77c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 07 09:58:44 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Main process exited, code=exited, status=139/n/a
Dec 07 09:58:44 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Failed with result 'exit-code'.
Dec 07 09:58:44 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Consumed 1.490s CPU time.
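Note: the unit-failure lines above report exit status 139. Since the unit's main process here is the podman/conmon wrapper rather than ganesha itself, that value appears to be the usual 128 + signal-number encoding of the child's death, i.e. SIGSEGV (11), matching the core dump just recorded. A one-line check, as a sketch:

    import signal

    status = 139                      # "Main process exited, code=exited, status=139" above
    print(signal.Signals(status - 128).name)   # SIGSEGV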
Dec 07 09:58:45 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:58:45 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:45 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:58:45 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:45.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:58:45 compute-1 sudo[201391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iaeyjmyynoqhgxlrnuziikrdsjaaihmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101525.2603927-102-21195256554703/AnsiballZ_setup.py'
Dec 07 09:58:45 compute-1 sudo[201391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:58:45 compute-1 python3.9[201393]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 07 09:58:45 compute-1 ceph-mon[80077]: pgmap v488: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 07 09:58:46 compute-1 sudo[201391]: pam_unix(sudo:session): session closed for user root
Dec 07 09:58:46 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:46 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:58:46 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:46.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:58:46 compute-1 sudo[201476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rumdgfwtvtalnngkhwamfhiovpsqihtf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101525.2603927-102-21195256554703/AnsiballZ_dnf.py'
Dec 07 09:58:46 compute-1 sudo[201476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:58:46 compute-1 python3.9[201478]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 07 09:58:47 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:47 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:58:47 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:47.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:58:47 compute-1 ceph-mon[80077]: pgmap v489: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 07 09:58:48 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:48 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:58:48 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:48.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:58:49 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/095849 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 07 09:58:49 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:49 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:58:49 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:49.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:58:49 compute-1 ceph-mon[80077]: pgmap v490: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 07 09:58:49 compute-1 sshd-session[201481]: Invalid user postgres from 104.248.193.130 port 52044
Dec 07 09:58:49 compute-1 sshd-session[201481]: Connection closed by invalid user postgres 104.248.193.130 port 52044 [preauth]
Dec 07 09:58:50 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:58:50 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:50 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:58:50 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:50.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:58:50 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/095850 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 07 09:58:51 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:51 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:58:51 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:51.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:58:52 compute-1 ceph-mon[80077]: pgmap v491: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 07 09:58:52 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:52 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:58:52 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:52.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:58:52 compute-1 sudo[201476]: pam_unix(sudo:session): session closed for user root
Dec 07 09:58:53 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:53 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:58:53 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:53.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:58:53 compute-1 sudo[201634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfexvypziglcrsgbyziwuzipxdritbim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101533.1542048-138-206163945228484/AnsiballZ_stat.py'
Dec 07 09:58:53 compute-1 sudo[201634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:58:53 compute-1 python3.9[201636]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 07 09:58:53 compute-1 sudo[201634]: pam_unix(sudo:session): session closed for user root
Dec 07 09:58:54 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:54 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:58:54 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:54.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:58:54 compute-1 ceph-mon[80077]: pgmap v492: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Dec 07 09:58:54 compute-1 sudo[201787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zddcanltihuudqkgridxhthafbrngiib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101534.181397-168-252071545334294/AnsiballZ_command.py'
Dec 07 09:58:54 compute-1 sudo[201787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:58:54 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Scheduled restart job, restart counter is at 7.
Dec 07 09:58:54 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.jddrlu for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec 07 09:58:54 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Consumed 1.490s CPU time.
Dec 07 09:58:54 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.jddrlu for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c...
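Note: after the failure, systemd schedules the seventh restart of the ganesha unit and starts it again at 09:58:54. To watch a crash loop like this from the node, the restart count and last exit status are exposed as unit properties; a minimal sketch, assuming the standard systemctl properties NRestarts, ExecMainStatus and ActiveState, with the unit name copied from the log:

    import subprocess

    # Unit name copied from the journal lines above; adjust for other FSIDs/daemons.
    UNIT = "ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service"

    out = subprocess.run(
        ["systemctl", "show", UNIT, "--property=NRestarts,ExecMainStatus,ActiveState"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(out)   # e.g. NRestarts=7, ExecMainStatus=139, ActiveState=activating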
Dec 07 09:58:54 compute-1 python3.9[201789]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:58:54 compute-1 sudo[201787]: pam_unix(sudo:session): session closed for user root
Dec 07 09:58:54 compute-1 podman[201860]: 2025-12-07 09:58:54.949040272 +0000 UTC m=+0.038753825 container create 0ae6a0223306e17d3258772ea8eac2cd3c3469724f0f54488336428d40c51fd5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default)
Dec 07 09:58:54 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a841881b1b7b3723f1b7aee828a2151cdbfe83d070478a96889de21a60bebb99/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec 07 09:58:54 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a841881b1b7b3723f1b7aee828a2151cdbfe83d070478a96889de21a60bebb99/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 07 09:58:54 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a841881b1b7b3723f1b7aee828a2151cdbfe83d070478a96889de21a60bebb99/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec 07 09:58:54 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a841881b1b7b3723f1b7aee828a2151cdbfe83d070478a96889de21a60bebb99/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.jddrlu-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec 07 09:58:55 compute-1 podman[201860]: 2025-12-07 09:58:55.009011754 +0000 UTC m=+0.098725347 container init 0ae6a0223306e17d3258772ea8eac2cd3c3469724f0f54488336428d40c51fd5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 07 09:58:55 compute-1 podman[201860]: 2025-12-07 09:58:55.013770323 +0000 UTC m=+0.103483886 container start 0ae6a0223306e17d3258772ea8eac2cd3c3469724f0f54488336428d40c51fd5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec 07 09:58:55 compute-1 bash[201860]: 0ae6a0223306e17d3258772ea8eac2cd3c3469724f0f54488336428d40c51fd5
Dec 07 09:58:55 compute-1 podman[201860]: 2025-12-07 09:58:54.931505745 +0000 UTC m=+0.021219338 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 07 09:58:55 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:58:55 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec 07 09:58:55 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:58:55 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec 07 09:58:55 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.jddrlu for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec 07 09:58:55 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:58:55 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec 07 09:58:55 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:58:55 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec 07 09:58:55 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:58:55 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec 07 09:58:55 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:58:55 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec 07 09:58:55 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:58:55 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec 07 09:58:55 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:58:55 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:58:55 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 07 09:58:55 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:55 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:58:55 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:55.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:58:55 compute-1 sudo[202043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opbsmrlpqzrfhfsmodnxgzqlcvolwrgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101535.2272365-198-229799198811328/AnsiballZ_stat.py'
Dec 07 09:58:55 compute-1 sudo[202043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:58:55 compute-1 python3.9[202045]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 07 09:58:55 compute-1 sudo[202043]: pam_unix(sudo:session): session closed for user root
Dec 07 09:58:56 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:56 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:58:56 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:56.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:58:56 compute-1 ceph-mon[80077]: pgmap v493: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 07 09:58:56 compute-1 sudo[202196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxiosirjurnavzlbafffugpiidulybgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101536.0381434-222-97144686269075/AnsiballZ_command.py'
Dec 07 09:58:56 compute-1 sudo[202196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:58:56 compute-1 python3.9[202198]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:58:56 compute-1 sudo[202196]: pam_unix(sudo:session): session closed for user root
Dec 07 09:58:57 compute-1 sudo[202349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxmvhxalfwgnersznllrahmafvcaojcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101536.9030664-246-37869646615372/AnsiballZ_stat.py'
Dec 07 09:58:57 compute-1 sudo[202349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:58:57 compute-1 python3.9[202351]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:58:57 compute-1 sudo[202349]: pam_unix(sudo:session): session closed for user root
Dec 07 09:58:57 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:57 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:58:57 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:57.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:58:57 compute-1 sudo[202472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eylltrfdltyrkeawrzoeytnwuowgmpiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101536.9030664-246-37869646615372/AnsiballZ_copy.py'
Dec 07 09:58:57 compute-1 sudo[202472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:58:58 compute-1 python3.9[202474]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765101536.9030664-246-37869646615372/.source.iscsi _original_basename=.hl_iyrk3 follow=False checksum=222cdf48a8a7a39b44226edd1025f57c9ab51398 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:58:58 compute-1 sudo[202472]: pam_unix(sudo:session): session closed for user root
Dec 07 09:58:58 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:58 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:58:58 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:58:58.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:58:58 compute-1 ceph-mon[80077]: pgmap v494: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Dec 07 09:58:58 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:58:58 compute-1 sudo[202625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgibchleqnbqlkqepvxzqlvmmyrqaexh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101538.304661-291-128217128488107/AnsiballZ_file.py'
Dec 07 09:58:58 compute-1 sudo[202625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:58:58 compute-1 python3.9[202627]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:58:58 compute-1 sudo[202625]: pam_unix(sudo:session): session closed for user root
Dec 07 09:58:59 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:58:59 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:58:59 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:58:59.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:58:59 compute-1 sudo[202777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtfquheenqmofwxvzgmjqbwdvwtuvgsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101539.2009335-315-180085613493605/AnsiballZ_lineinfile.py'
Dec 07 09:58:59 compute-1 sudo[202777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:58:59 compute-1 ceph-mon[80077]: pgmap v495: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Dec 07 09:58:59 compute-1 sudo[202780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 09:58:59 compute-1 sudo[202780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:58:59 compute-1 sudo[202780]: pam_unix(sudo:session): session closed for user root
Dec 07 09:58:59 compute-1 python3.9[202779]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:58:59 compute-1 sudo[202777]: pam_unix(sudo:session): session closed for user root
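Note: the lineinfile invocation above ensures a single CHAP-algorithms setting in /etc/iscsi/iscsid.conf: replace any line matching ^node.session.auth.chap_algs, otherwise insert the new line after the commented default. A rough standalone sketch of that idempotent edit, as a simplification of what the module does, with the path, line and patterns copied from the logged parameters:

    import re
    from pathlib import Path

    # Parameters copied from the ansible.builtin.lineinfile invocation above.
    path  = Path("/etc/iscsi/iscsid.conf")
    line  = "node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5"
    match = re.compile(r"^node.session.auth.chap_algs")
    after = re.compile(r"^#node.session.auth.chap.algs")

    lines = path.read_text().splitlines()
    for i, l in enumerate(lines):
        if match.match(l):
            lines[i] = line               # an existing setting: replace it in place
            break
    else:
        # no existing setting: insert after the commented default if present, else append
        idx = next((i for i, l in enumerate(lines) if after.match(l)), None)
        if idx is not None:
            lines.insert(idx + 1, line)
        else:
            lines.append(line)
    path.write_text("\n".join(lines) + "\n")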
Dec 07 09:59:00 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:59:00 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:00 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:59:00 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:00.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:59:00 compute-1 sudo[202955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrtibfdszqkhllcqvnbncjumdhxfcthq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101540.206648-342-16354185077228/AnsiballZ_systemd_service.py'
Dec 07 09:59:00 compute-1 sudo[202955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:59:01 compute-1 python3.9[202957]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 07 09:59:01 compute-1 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Dec 07 09:59:01 compute-1 sudo[202955]: pam_unix(sudo:session): session closed for user root
Dec 07 09:59:01 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:01 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 07 09:59:01 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:01 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 07 09:59:01 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:01 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:59:01 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:01.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:59:01 compute-1 sudo[203111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ooudtfvlrtgnrnoiskepdtcgnkveradx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101541.463029-366-53150194757070/AnsiballZ_systemd_service.py'
Dec 07 09:59:01 compute-1 sudo[203111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:59:01 compute-1 ceph-mon[80077]: pgmap v496: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Dec 07 09:59:02 compute-1 python3.9[203113]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 07 09:59:02 compute-1 systemd[1]: Reloading.
Dec 07 09:59:02 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:02 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:59:02 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:02.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:59:02 compute-1 systemd-rc-local-generator[203160]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:59:02 compute-1 systemd-sysv-generator[203164]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:59:02 compute-1 podman[203117]: 2025-12-07 09:59:02.305388727 +0000 UTC m=+0.100644690 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 07 09:59:02 compute-1 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec 07 09:59:02 compute-1 systemd[1]: Starting Open-iSCSI...
Dec 07 09:59:02 compute-1 kernel: Loading iSCSI transport class v2.0-870.
Dec 07 09:59:02 compute-1 systemd[1]: Started Open-iSCSI.
Dec 07 09:59:02 compute-1 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Dec 07 09:59:02 compute-1 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Dec 07 09:59:02 compute-1 sudo[203111]: pam_unix(sudo:session): session closed for user root
Dec 07 09:59:03 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:03 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:59:03 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:03.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:59:03 compute-1 sudo[203337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nchniqubkwmxjjmvpxdewrevmnqwrtzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101543.328478-399-251410209906166/AnsiballZ_service_facts.py'
Dec 07 09:59:03 compute-1 sudo[203337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:59:03 compute-1 python3.9[203339]: ansible-ansible.builtin.service_facts Invoked
Dec 07 09:59:03 compute-1 network[203356]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 07 09:59:03 compute-1 network[203357]: 'network-scripts' will be removed from distribution in near future.
Dec 07 09:59:03 compute-1 network[203358]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 07 09:59:03 compute-1 ceph-mon[80077]: pgmap v497: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Dec 07 09:59:04 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:04 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:59:04 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:04.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:59:04 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:04 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 07 09:59:04 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:04 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 07 09:59:04 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:04 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 07 09:59:04 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:04 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 07 09:59:05 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:59:05 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:05 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:59:05 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:05.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:59:05 compute-1 ceph-mon[80077]: pgmap v498: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 07 09:59:06 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:06 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:59:06 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:06.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:59:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:07 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 07 09:59:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:07 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec 07 09:59:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:07 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec 07 09:59:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:07 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec 07 09:59:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:07 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec 07 09:59:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:07 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec 07 09:59:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:07 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec 07 09:59:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:07 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 09:59:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:07 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 09:59:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:07 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 09:59:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:07 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec 07 09:59:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:07 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 09:59:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:07 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec 07 09:59:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:07 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec 07 09:59:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:07 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec 07 09:59:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:07 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec 07 09:59:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:07 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec 07 09:59:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:07 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec 07 09:59:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:07 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec 07 09:59:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:07 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec 07 09:59:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:07 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec 07 09:59:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:07 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec 07 09:59:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:07 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec 07 09:59:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:07 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec 07 09:59:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:07 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 07 09:59:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:07 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec 07 09:59:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:07 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 07 09:59:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:07 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e88000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:07 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:07 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:59:07 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:07.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:59:07 compute-1 sudo[203337]: pam_unix(sudo:session): session closed for user root
Dec 07 09:59:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:07 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e780016e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:08 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:08 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:59:08 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:08.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:59:08 compute-1 ceph-mon[80077]: pgmap v499: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 07 09:59:08 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:08 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e64000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:09 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/095909 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 07 09:59:09 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:09 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e60000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:09 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:09 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:59:09 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:09.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:59:09 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:09 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e6c000fa0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:10 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:59:10 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:10 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:59:10 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:10.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:59:10 compute-1 ceph-mon[80077]: pgmap v500: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 07 09:59:10 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:10 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e780021e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:11 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:11 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e780021e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:11 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:11 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:59:11 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:11.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:59:11 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:11 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e600016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:12 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:12 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:59:12 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:12.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:59:12 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/095912 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 07 09:59:12 compute-1 sudo[203659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swvmusbiyyqpohuqnxsyfhrcnewcwglx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101552.0627189-429-16027408336715/AnsiballZ_file.py'
Dec 07 09:59:12 compute-1 sudo[203659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:59:12 compute-1 ceph-mon[80077]: pgmap v501: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Dec 07 09:59:12 compute-1 podman[203623]: 2025-12-07 09:59:12.429713474 +0000 UTC m=+0.079014352 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 07 09:59:12 compute-1 python3.9[203666]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 07 09:59:12 compute-1 sudo[203659]: pam_unix(sudo:session): session closed for user root
Dec 07 09:59:12 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:12 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e6c001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:13 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:13 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e640016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:13 compute-1 sudo[203818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjfdvmrsfclxoqyjhzbcooxnkuruinwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101552.9188938-453-194229852182737/AnsiballZ_modprobe.py'
Dec 07 09:59:13 compute-1 sudo[203818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:59:13 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:13 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:59:13 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:13.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:59:13 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:59:13 compute-1 python3.9[203820]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Dec 07 09:59:13 compute-1 sudo[203818]: pam_unix(sudo:session): session closed for user root
Dec 07 09:59:13 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:13 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e780021e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:14 compute-1 sudo[203975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhmlqwtgigpttoennufihzjfxpkcjwlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101553.8867714-477-184480593906900/AnsiballZ_stat.py'
Dec 07 09:59:14 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:14 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:59:14 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:14.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:59:14 compute-1 sudo[203975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:59:14 compute-1 python3.9[203977]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:59:14 compute-1 sudo[203975]: pam_unix(sudo:session): session closed for user root
Dec 07 09:59:14 compute-1 ceph-mon[80077]: pgmap v502: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 597 B/s wr, 2 op/s
Dec 07 09:59:14 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:14 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e600016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:14 compute-1 sudo[204098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwltjefjykerrefymzizuwppioabxmec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101553.8867714-477-184480593906900/AnsiballZ_copy.py'
Dec 07 09:59:14 compute-1 sudo[204098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:59:15 compute-1 python3.9[204100]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765101553.8867714-477-184480593906900/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:59:15 compute-1 sudo[204098]: pam_unix(sudo:session): session closed for user root
Dec 07 09:59:15 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:59:15 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:15 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e6c001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:15 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:15 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:59:15 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:15.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:59:15 compute-1 ceph-mon[80077]: pgmap v503: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 597 B/s wr, 2 op/s
Dec 07 09:59:15 compute-1 sudo[204250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytqhnzappekieivnrugiwtqpgvjyrqty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101555.4299786-525-26759712358277/AnsiballZ_lineinfile.py'
Dec 07 09:59:15 compute-1 sudo[204250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:59:15 compute-1 python3.9[204252]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:59:15 compute-1 sudo[204250]: pam_unix(sudo:session): session closed for user root
Dec 07 09:59:15 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:15 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e64001fc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:16 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:16 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:59:16 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:16.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:59:16 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:16 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e600016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:16 compute-1 sudo[204403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyftbwfhcfvrpyiwulfoicijfrkfhagv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101556.253602-549-110536399117716/AnsiballZ_systemd.py'
Dec 07 09:59:16 compute-1 sudo[204403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:59:17 compute-1 python3.9[204405]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 07 09:59:17 compute-1 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 07 09:59:17 compute-1 systemd[1]: Stopped Load Kernel Modules.
Dec 07 09:59:17 compute-1 systemd[1]: Stopping Load Kernel Modules...
Dec 07 09:59:17 compute-1 systemd[1]: Starting Load Kernel Modules...
Dec 07 09:59:17 compute-1 systemd[1]: Finished Load Kernel Modules.
Dec 07 09:59:17 compute-1 sudo[204403]: pam_unix(sudo:session): session closed for user root
Dec 07 09:59:17 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:17 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e600016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:17 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:17 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:59:17 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:17.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:59:17 compute-1 sudo[204559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlfpplbpzppjfpszstkimnornupzhdqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101557.554568-573-259884683943614/AnsiballZ_file.py'
Dec 07 09:59:17 compute-1 sudo[204559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:59:17 compute-1 ceph-mon[80077]: pgmap v504: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Dec 07 09:59:17 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:17 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e6c001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:17 compute-1 python3.9[204561]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:59:17 compute-1 sudo[204559]: pam_unix(sudo:session): session closed for user root
Dec 07 09:59:18 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:18 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:59:18 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:18.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:59:18 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:18 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e6c001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:18 compute-1 sudo[204712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxdianswsthzzhumbiagrglrwtjcduhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101558.4146829-600-174408501383620/AnsiballZ_stat.py'
Dec 07 09:59:18 compute-1 sudo[204712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:59:18 compute-1 python3.9[204714]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 07 09:59:19 compute-1 sudo[204712]: pam_unix(sudo:session): session closed for user root
Dec 07 09:59:19 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:19 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e58000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:19 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:19 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:59:19 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:19.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:59:19 compute-1 sudo[204864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgrrflzchorqnicnlawrbwzwqjdspngp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101559.2947233-627-188969304029877/AnsiballZ_stat.py'
Dec 07 09:59:19 compute-1 sudo[204864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:59:19 compute-1 python3.9[204866]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 07 09:59:19 compute-1 sudo[204867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 09:59:19 compute-1 sudo[204867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:59:19 compute-1 sudo[204867]: pam_unix(sudo:session): session closed for user root
Dec 07 09:59:19 compute-1 sudo[204864]: pam_unix(sudo:session): session closed for user root
Dec 07 09:59:19 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:19 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e64001fc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:19 compute-1 ceph-mon[80077]: pgmap v505: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Dec 07 09:59:20 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:59:20 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:20 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:59:20 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:20.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:59:20 compute-1 sudo[205042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzlfyrwxpwbzdsbonpdiruzyjoyihwcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101560.0933135-651-211622264662466/AnsiballZ_stat.py'
Dec 07 09:59:20 compute-1 sudo[205042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:59:20 compute-1 python3.9[205044]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:59:20 compute-1 sudo[205042]: pam_unix(sudo:session): session closed for user root
Dec 07 09:59:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:20 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e780032e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:21 compute-1 sudo[205165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuprcokkllnhzqbbcjrltcoxxfyrinqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101560.0933135-651-211622264662466/AnsiballZ_copy.py'
Dec 07 09:59:21 compute-1 sudo[205165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:59:21 compute-1 python3.9[205167]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765101560.0933135-651-211622264662466/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:59:21 compute-1 sudo[205165]: pam_unix(sudo:session): session closed for user root
Dec 07 09:59:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:21 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e6c003340 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:21 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:21 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:59:21 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:21.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:59:21 compute-1 sudo[205317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxidfpppbclxkhjievmmidelirrzectk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101561.5725317-696-229597600986978/AnsiballZ_command.py'
Dec 07 09:59:21 compute-1 sudo[205317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:59:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:21 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e580016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:21 compute-1 ceph-mon[80077]: pgmap v506: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Dec 07 09:59:22 compute-1 python3.9[205319]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 09:59:22 compute-1 sudo[205317]: pam_unix(sudo:session): session closed for user root
Dec 07 09:59:22 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:22 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:59:22 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:22.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:59:22 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:22 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e580016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:22 compute-1 sudo[205471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxymzudiqtbmutehihpztyavokoaelek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101562.513412-720-211946698467650/AnsiballZ_lineinfile.py'
Dec 07 09:59:22 compute-1 sudo[205471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:59:23 compute-1 python3.9[205473]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:59:23 compute-1 sudo[205471]: pam_unix(sudo:session): session closed for user root
Dec 07 09:59:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:23 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e780032e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:23 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:23 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:59:23 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:23.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:59:23 compute-1 sudo[205623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvkatwflyoefamnrlkvvqcgjtsizbvqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101563.3553815-744-46647088687324/AnsiballZ_replace.py'
Dec 07 09:59:23 compute-1 sudo[205623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:59:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:23 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e6c003340 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:24 compute-1 ceph-mon[80077]: pgmap v507: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 07 09:59:24 compute-1 python3.9[205625]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:59:24 compute-1 sudo[205623]: pam_unix(sudo:session): session closed for user root
Dec 07 09:59:24 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:24 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:59:24 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:24.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:59:24 compute-1 sudo[205776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjorgjwfueyektghnrnkyqttlthmaqfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101564.3076465-769-253979872975835/AnsiballZ_replace.py'
Dec 07 09:59:24 compute-1 sudo[205776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:59:24 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:24 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e580016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:24 compute-1 python3.9[205778]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:59:24 compute-1 sudo[205776]: pam_unix(sudo:session): session closed for user root
Dec 07 09:59:25 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:59:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:25 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e580016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:25 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:25 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:59:25 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:25.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:59:25 compute-1 sudo[205928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysohccuzrjicntiyfmyfohmsjbgxnmau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101565.263801-795-34786144811170/AnsiballZ_lineinfile.py'
Dec 07 09:59:25 compute-1 sudo[205928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:59:25 compute-1 python3.9[205930]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:59:25 compute-1 sudo[205928]: pam_unix(sudo:session): session closed for user root
Dec 07 09:59:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:25 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e780032e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:26 compute-1 ceph-mon[80077]: pgmap v508: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 07 09:59:26 compute-1 sudo[206081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcrnckzlbizsrjkpefqvrebgsdnuufdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101565.9523551-795-144817906146508/AnsiballZ_lineinfile.py'
Dec 07 09:59:26 compute-1 sudo[206081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:59:26 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:26 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:59:26 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:26.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:59:26 compute-1 python3.9[206083]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:59:26 compute-1 sudo[206081]: pam_unix(sudo:session): session closed for user root
Dec 07 09:59:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:26 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e6c003340 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:26 compute-1 sudo[206233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtpufenyvecaqbhmlgrdxhqsfqjeijom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101566.5930126-795-109820222124200/AnsiballZ_lineinfile.py'
Dec 07 09:59:26 compute-1 sudo[206233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:59:27 compute-1 python3.9[206235]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:59:27 compute-1 sudo[206233]: pam_unix(sudo:session): session closed for user root
Dec 07 09:59:27 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:27 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e580016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:27 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:27 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:59:27 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:27.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:59:27 compute-1 sudo[206385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujkfpczzszjsdrbmockvwhredluqvnwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101567.2760568-795-58585630306657/AnsiballZ_lineinfile.py'
Dec 07 09:59:27 compute-1 sudo[206385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:59:27 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/095927 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 07 09:59:27 compute-1 python3.9[206387]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:59:27 compute-1 sudo[206385]: pam_unix(sudo:session): session closed for user root
Dec 07 09:59:27 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:27 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e580016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:28 compute-1 ceph-mon[80077]: pgmap v509: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:59:28 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:59:28 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:28 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:59:28 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:28.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:59:28 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:28 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e780032e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:28 compute-1 sudo[206538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gejwklivkahxtiyllynruhkbfougkyid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101568.3726866-882-229388901236159/AnsiballZ_stat.py'
Dec 07 09:59:28 compute-1 sudo[206538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:59:29 compute-1 python3.9[206540]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 07 09:59:29 compute-1 sudo[206538]: pam_unix(sudo:session): session closed for user root
Dec 07 09:59:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:29 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e6c003340 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:29 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:29 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:59:29 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:29.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:59:29 compute-1 sudo[206692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wawnkxqasgjaetxmmplocgrjgyitbgba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101569.4305208-906-2775928871993/AnsiballZ_file.py'
Dec 07 09:59:29 compute-1 sudo[206692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:59:29 compute-1 python3.9[206694]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:59:29 compute-1 sudo[206692]: pam_unix(sudo:session): session closed for user root
Dec 07 09:59:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:29 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e580016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:30 compute-1 ceph-mon[80077]: pgmap v510: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 09:59:30 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:59:30 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:30 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:59:30 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:30.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:59:30 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/095930 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 07 09:59:30 compute-1 sudo[206845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuvpegmjxvjxhlktznocherekxbypsum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101570.4165354-933-126594106195068/AnsiballZ_file.py'
Dec 07 09:59:30 compute-1 sudo[206845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:59:30 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:30 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e64003080 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:30 compute-1 python3.9[206847]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:59:30 compute-1 sudo[206845]: pam_unix(sudo:session): session closed for user root
Dec 07 09:59:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:31 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e780032e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:31 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:31 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:59:31 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:31.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:59:31 compute-1 sudo[206997]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqgjvgbdztypzaotdsffzvonmztgqpgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101571.297902-957-33550619888113/AnsiballZ_stat.py'
Dec 07 09:59:31 compute-1 sudo[206997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:59:31 compute-1 python3.9[206999]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:59:31 compute-1 sudo[206997]: pam_unix(sudo:session): session closed for user root
Dec 07 09:59:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:31 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e6c004050 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:32 compute-1 ceph-mon[80077]: pgmap v511: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 07 09:59:32 compute-1 sudo[207076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovfsehekgzzpycczlosnrhjebbdqybhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101571.297902-957-33550619888113/AnsiballZ_file.py'
Dec 07 09:59:32 compute-1 sudo[207076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:59:32 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:32 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 09:59:32 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:32.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 09:59:32 compute-1 python3.9[207078]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:59:32 compute-1 sudo[207076]: pam_unix(sudo:session): session closed for user root
Dec 07 09:59:32 compute-1 sudo[207123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 09:59:32 compute-1 sudo[207123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:59:32 compute-1 sudo[207123]: pam_unix(sudo:session): session closed for user root
Dec 07 09:59:32 compute-1 sudo[207182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 07 09:59:32 compute-1 sudo[207182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:59:32 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:32 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e580016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:32 compute-1 podman[207171]: 2025-12-07 09:59:32.792600566 +0000 UTC m=+0.086870555 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 07 09:59:32 compute-1 sudo[207301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uogmcwoaikwwghnzjywpnyltnrcvpwmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101572.6357794-957-28681115857194/AnsiballZ_stat.py'
Dec 07 09:59:32 compute-1 sudo[207301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:59:33 compute-1 python3.9[207303]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:59:33 compute-1 sudo[207301]: pam_unix(sudo:session): session closed for user root
Dec 07 09:59:33 compute-1 sudo[207182]: pam_unix(sudo:session): session closed for user root
Dec 07 09:59:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:33 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e640039a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:33 compute-1 sudo[207412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gssvkhgdrodebqaczllldhdmmmsvxrxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101572.6357794-957-28681115857194/AnsiballZ_file.py'
Dec 07 09:59:33 compute-1 sudo[207412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:59:33 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:33 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:59:33 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:33.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:59:33 compute-1 python3.9[207414]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:59:33 compute-1 sudo[207412]: pam_unix(sudo:session): session closed for user root
Dec 07 09:59:33 compute-1 sshd-session[207319]: Invalid user oracle from 104.248.193.130 port 39806
Dec 07 09:59:33 compute-1 sshd-session[207319]: Connection closed by invalid user oracle 104.248.193.130 port 39806 [preauth]
Dec 07 09:59:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:33 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e780032e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:34 compute-1 ceph-mon[80077]: pgmap v512: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Dec 07 09:59:34 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:59:34 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 07 09:59:34 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:59:34 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:59:34 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 07 09:59:34 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 07 09:59:34 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 09:59:34 compute-1 sudo[207565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvmqjctrrwvoiygholppaaiqihzldpto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101573.9659705-1027-3966801846471/AnsiballZ_file.py'
Dec 07 09:59:34 compute-1 sudo[207565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:59:34 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:34 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:59:34 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:34.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:59:34 compute-1 python3.9[207567]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:59:34 compute-1 sudo[207565]: pam_unix(sudo:session): session closed for user root
Dec 07 09:59:34 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:34 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e6c004050 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:35 compute-1 sudo[207717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxietdlrbzkvdggaqhsddwenhcltzyej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101574.8298433-1051-61085081982395/AnsiballZ_stat.py'
Dec 07 09:59:35 compute-1 sudo[207717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:59:35 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:59:35 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:35 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e6c004050 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:35 compute-1 python3.9[207719]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:59:35 compute-1 sudo[207717]: pam_unix(sudo:session): session closed for user root
Dec 07 09:59:35 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:35 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:59:35 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:35.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:59:35 compute-1 sudo[207795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjildfsnvstmrabyoduhubuihzvtjtgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101574.8298433-1051-61085081982395/AnsiballZ_file.py'
Dec 07 09:59:35 compute-1 sudo[207795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:59:35 compute-1 python3.9[207797]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:59:35 compute-1 sudo[207795]: pam_unix(sudo:session): session closed for user root
Dec 07 09:59:35 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:35 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e640039a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:36 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:36 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 07 09:59:36 compute-1 ceph-mon[80077]: pgmap v513: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec 07 09:59:36 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:36 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:59:36 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:36.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:59:36 compute-1 sudo[207948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhhoqqzvnrytckgcfvueqtcsisakrutx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101576.2875237-1086-21894566652618/AnsiballZ_stat.py'
Dec 07 09:59:36 compute-1 sudo[207948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:59:36 compute-1 python3.9[207950]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:59:36 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:36 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e780032e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:36 compute-1 sudo[207948]: pam_unix(sudo:session): session closed for user root
Dec 07 09:59:37 compute-1 sudo[208026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgyuqkfbyzrhptwhotpopgwztotmdarg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101576.2875237-1086-21894566652618/AnsiballZ_file.py'
Dec 07 09:59:37 compute-1 sudo[208026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:59:37 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:37 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e6c004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:37 compute-1 python3.9[208028]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:59:37 compute-1 sudo[208026]: pam_unix(sudo:session): session closed for user root
Dec 07 09:59:37 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:37 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:59:37 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:37.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:59:37 compute-1 sudo[208152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 07 09:59:37 compute-1 sudo[208152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:59:37 compute-1 sudo[208152]: pam_unix(sudo:session): session closed for user root
Dec 07 09:59:37 compute-1 sudo[208203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzekobsekbfpfkjwavozlrwogmqdwjrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101577.6437194-1122-104655548997226/AnsiballZ_systemd.py'
Dec 07 09:59:37 compute-1 sudo[208203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:59:37 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:37 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e6c004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:38 compute-1 ceph-mon[80077]: pgmap v514: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec 07 09:59:38 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:59:38 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 09:59:38 compute-1 python3.9[208205]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 07 09:59:38 compute-1 systemd[1]: Reloading.
Dec 07 09:59:38 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:38 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:59:38 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:38.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:59:38 compute-1 systemd-rc-local-generator[208235]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:59:38 compute-1 systemd-sysv-generator[208238]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:59:38 compute-1 sudo[208203]: pam_unix(sudo:session): session closed for user root
Dec 07 09:59:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:59:38.631 142603 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 09:59:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:59:38.633 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 09:59:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 09:59:38.633 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 09:59:38 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:38 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e640039a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:39 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 07 09:59:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:39 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 07 09:59:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:39 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e780032e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:39 compute-1 sudo[208395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzpemexotbuqswebmusoiygfzkjotmvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101579.0393226-1147-114294396858874/AnsiballZ_stat.py'
Dec 07 09:59:39 compute-1 sudo[208395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:59:39 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:39 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:59:39 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:39.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:59:39 compute-1 python3.9[208397]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:59:39 compute-1 sudo[208395]: pam_unix(sudo:session): session closed for user root
Dec 07 09:59:39 compute-1 sudo[208496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvkoekesvvejnmxpegugwalfkpewayru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101579.0393226-1147-114294396858874/AnsiballZ_file.py'
Dec 07 09:59:39 compute-1 sudo[208496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:59:39 compute-1 sudo[208447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 09:59:39 compute-1 sudo[208447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 09:59:39 compute-1 sudo[208447]: pam_unix(sudo:session): session closed for user root
Dec 07 09:59:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:39 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e54000b60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:40 compute-1 python3.9[208499]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:59:40 compute-1 sudo[208496]: pam_unix(sudo:session): session closed for user root
Dec 07 09:59:40 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:59:40 compute-1 ceph-mon[80077]: pgmap v515: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec 07 09:59:40 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:40 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:59:40 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:40.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:59:40 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:40 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e580036e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:40 compute-1 sudo[208651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnlomfuqyloljczydqtfkejwphdhgibs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101580.442307-1182-269829054699520/AnsiballZ_stat.py'
Dec 07 09:59:40 compute-1 sudo[208651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:59:40 compute-1 python3.9[208653]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:59:41 compute-1 sudo[208651]: pam_unix(sudo:session): session closed for user root
Dec 07 09:59:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:41 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e580036e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:41 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:41 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:59:41 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:41.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:59:41 compute-1 sudo[208729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efmccfijdzjtvikhehvxaswiluovubcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101580.442307-1182-269829054699520/AnsiballZ_file.py'
Dec 07 09:59:41 compute-1 sudo[208729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:59:41 compute-1 python3.9[208731]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:59:41 compute-1 sudo[208729]: pam_unix(sudo:session): session closed for user root
Dec 07 09:59:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:41 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e780032e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:42 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:42 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 07 09:59:42 compute-1 ceph-mon[80077]: pgmap v516: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 767 B/s wr, 2 op/s
Dec 07 09:59:42 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:42 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:59:42 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:42.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:59:42 compute-1 sudo[208882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kufyfukwaomhyszqeoqpnywwjfwvdnak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101582.0323863-1219-157778880118758/AnsiballZ_systemd.py'
Dec 07 09:59:42 compute-1 sudo[208882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:59:42 compute-1 podman[208885]: 2025-12-07 09:59:42.581867335 +0000 UTC m=+0.072775361 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 07 09:59:42 compute-1 python3.9[208884]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 07 09:59:42 compute-1 systemd[1]: Reloading.
Dec 07 09:59:42 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:42 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e540016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:42 compute-1 systemd-rc-local-generator[208923]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 09:59:42 compute-1 systemd-sysv-generator[208929]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 09:59:43 compute-1 systemd[1]: Starting Create netns directory...
Dec 07 09:59:43 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 07 09:59:43 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 07 09:59:43 compute-1 systemd[1]: Finished Create netns directory.
Dec 07 09:59:43 compute-1 sudo[208882]: pam_unix(sudo:session): session closed for user root
Dec 07 09:59:43 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:59:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:43 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e540016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:43 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:43 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:59:43 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:43.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:59:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:43 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e640039a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:44 compute-1 sudo[209095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frbnozcsvdnpzeaubfseaqhfgqbuogjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101583.7074432-1248-26132631390700/AnsiballZ_file.py'
Dec 07 09:59:44 compute-1 sudo[209095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:59:44 compute-1 ceph-mon[80077]: pgmap v517: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 767 B/s wr, 2 op/s
Dec 07 09:59:44 compute-1 python3.9[209097]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:59:44 compute-1 sudo[209095]: pam_unix(sudo:session): session closed for user root
Dec 07 09:59:44 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:44 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:59:44 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:44.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:59:44 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:44 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e780032e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:44 compute-1 sudo[209248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sejuzkexwewbhxscafdesxgkbxzbcupg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101584.5605588-1272-210970375987985/AnsiballZ_stat.py'
Dec 07 09:59:44 compute-1 sudo[209248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:59:45 compute-1 python3.9[209250]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:59:45 compute-1 sudo[209248]: pam_unix(sudo:session): session closed for user root
Dec 07 09:59:45 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:59:45 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:45 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 07 09:59:45 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:45 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e580036e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:45 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:45 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:59:45 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:45.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:59:45 compute-1 sudo[209371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cumvnjnutphbsqjmpcjcesxujwivnlhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101584.5605588-1272-210970375987985/AnsiballZ_copy.py'
Dec 07 09:59:45 compute-1 sudo[209371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:59:45 compute-1 python3.9[209373]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765101584.5605588-1272-210970375987985/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:59:45 compute-1 sudo[209371]: pam_unix(sudo:session): session closed for user root
Dec 07 09:59:45 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:45 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e540016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:46 compute-1 ceph-mon[80077]: pgmap v518: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 07 09:59:46 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:46 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:59:46 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:46.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:59:46 compute-1 sudo[209524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpmgjvqtmigaqkrbnhiwdhnktzwsoopt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101586.3745632-1323-91683779808606/AnsiballZ_file.py'
Dec 07 09:59:46 compute-1 sudo[209524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:59:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:46 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e640039a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:46 compute-1 python3.9[209526]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 07 09:59:46 compute-1 sudo[209524]: pam_unix(sudo:session): session closed for user root
Dec 07 09:59:47 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:47 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e780032e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:47 compute-1 systemd[1]: virtnodedevd.service: Deactivated successfully.
Dec 07 09:59:47 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:47 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:59:47 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:47.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:59:47 compute-1 sudo[209677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvihokuqmbzdinkwrlysdrxtgzxgcbpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101587.2409396-1347-80755944020716/AnsiballZ_stat.py'
Dec 07 09:59:47 compute-1 sudo[209677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:59:47 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/095947 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 07 09:59:47 compute-1 python3.9[209679]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 09:59:47 compute-1 sudo[209677]: pam_unix(sudo:session): session closed for user root
Dec 07 09:59:47 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:47 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e580036e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:48 compute-1 sudo[209800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-letwqhjtbfoknpnkapjofyzhwrfbrbpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101587.2409396-1347-80755944020716/AnsiballZ_copy.py'
Dec 07 09:59:48 compute-1 sudo[209800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:59:48 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:48 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 07 09:59:48 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:48 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 07 09:59:48 compute-1 ceph-mon[80077]: pgmap v519: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Dec 07 09:59:48 compute-1 python3.9[209803]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765101587.2409396-1347-80755944020716/.source.json _original_basename=.het85req follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:59:48 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:48 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:59:48 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:48.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:59:48 compute-1 sudo[209800]: pam_unix(sudo:session): session closed for user root
Dec 07 09:59:48 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:48 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e580036e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:48 compute-1 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec 07 09:59:49 compute-1 sudo[209954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agxftqcunroljofrwbfkqrqoscmreuak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101588.6934154-1392-41944502702903/AnsiballZ_file.py'
Dec 07 09:59:49 compute-1 sudo[209954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:59:49 compute-1 python3.9[209956]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 09:59:49 compute-1 sudo[209954]: pam_unix(sudo:session): session closed for user root
Dec 07 09:59:49 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:49 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e640039a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:49 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:49 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:59:49 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:49.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:59:49 compute-1 sudo[210107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyrvtlxydfnuifkecvxflfslavpletef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101589.5575411-1416-257153609322684/AnsiballZ_stat.py'
Dec 07 09:59:49 compute-1 sudo[210107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:59:49 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:49 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e780032e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:50 compute-1 sudo[210107]: pam_unix(sudo:session): session closed for user root
Dec 07 09:59:50 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:59:50 compute-1 ceph-mon[80077]: pgmap v520: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Dec 07 09:59:50 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:50 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:59:50 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:50.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:59:50 compute-1 sudo[210231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjvnwirycarqkesyupqjzukhupjcfnwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101589.5575411-1416-257153609322684/AnsiballZ_copy.py'
Dec 07 09:59:50 compute-1 sudo[210231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:59:50 compute-1 sudo[210231]: pam_unix(sudo:session): session closed for user root
Dec 07 09:59:50 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:50 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e60000f30 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:51 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:51 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 07 09:59:51 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:51 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e580036e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:51 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:51 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:59:51 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:51.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:59:51 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:51 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e640039a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:52 compute-1 ceph-mon[80077]: pgmap v521: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 1.7 KiB/s wr, 5 op/s
Dec 07 09:59:52 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:52 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:59:52 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:52.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:59:52 compute-1 sudo[210384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrccgfwvbsaefrgaujafgereordkhbed ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101592.1186297-1467-148588403825714/AnsiballZ_container_config_data.py'
Dec 07 09:59:52 compute-1 sudo[210384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:59:52 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:52 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e780032e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:52 compute-1 python3.9[210386]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Dec 07 09:59:52 compute-1 sudo[210384]: pam_unix(sudo:session): session closed for user root
Dec 07 09:59:53 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:53 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e600010d0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:53 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:53 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:59:53 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:53.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:59:53 compute-1 sudo[210536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzqprqkuuvpivlzfdexqvusfsiqhonhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101593.199635-1494-57408782568163/AnsiballZ_container_config_hash.py'
Dec 07 09:59:53 compute-1 sudo[210536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:59:53 compute-1 python3.9[210538]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 07 09:59:53 compute-1 sudo[210536]: pam_unix(sudo:session): session closed for user root
Dec 07 09:59:53 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:53 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e580036e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:54 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:54 compute-1 ceph-mon[80077]: pgmap v522: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 07 09:59:54 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:59:54 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:54.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:59:54 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/095954 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 07 09:59:54 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:54 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e640039a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:54 compute-1 sudo[210689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnkrtaomakpxajrxycrwzrulqkwyaoky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101594.3039584-1521-219845062340804/AnsiballZ_podman_container_info.py'
Dec 07 09:59:54 compute-1 sudo[210689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:59:55 compute-1 python3.9[210691]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 07 09:59:55 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 09:59:55 compute-1 sudo[210689]: pam_unix(sudo:session): session closed for user root
Dec 07 09:59:55 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:55 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e640039a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:55 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:55 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:59:55 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:55.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:59:55 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:55 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e60001e80 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:56 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:56 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:59:56 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:56.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:59:56 compute-1 ceph-mon[80077]: pgmap v523: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 07 09:59:56 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:56 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e58003700 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:57 compute-1 sudo[210869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gndhohucukzfaclooycslvgsuieakkhh ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765101596.4689238-1560-117734868616857/AnsiballZ_edpm_container_manage.py'
Dec 07 09:59:57 compute-1 sudo[210869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:59:57 compute-1 python3[210871]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 07 09:59:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:57 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e780032e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:57 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:57 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:59:57 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:57.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:59:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:57 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e640039a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:58 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:58 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 09:59:58 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:09:59:58.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 09:59:58 compute-1 ceph-mon[80077]: pgmap v524: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 767 B/s wr, 2 op/s
Dec 07 09:59:58 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 09:59:58 compute-1 podman[210886]: 2025-12-07 09:59:58.481342296 +0000 UTC m=+1.055869737 image pull 9af6aa52ee187025bc25565b66d3eefb486acac26f9281e33f4cce76a40d21f7 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842
Dec 07 09:59:58 compute-1 podman[210945]: 2025-12-07 09:59:58.62941443 +0000 UTC m=+0.059915363 container create 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.vendor=CentOS)
Dec 07 09:59:58 compute-1 podman[210945]: 2025-12-07 09:59:58.599566857 +0000 UTC m=+0.030067810 image pull 9af6aa52ee187025bc25565b66d3eefb486acac26f9281e33f4cce76a40d21f7 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842
Dec 07 09:59:58 compute-1 python3[210871]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842
Dec 07 09:59:58 compute-1 sudo[210869]: pam_unix(sudo:session): session closed for user root
Dec 07 09:59:58 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:58 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e640039a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:59 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:59 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e58003720 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 09:59:59 compute-1 sudo[211131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fejhaajbraxfegenhcxbdevymdbujmna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101599.147729-1584-76811287694460/AnsiballZ_stat.py'
Dec 07 09:59:59 compute-1 sudo[211131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 09:59:59 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 09:59:59 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 09:59:59 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:09:59:59.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 09:59:59 compute-1 python3.9[211133]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 07 09:59:59 compute-1 sudo[211131]: pam_unix(sudo:session): session closed for user root
Dec 07 09:59:59 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 09:59:59 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e780032e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:00:00 compute-1 sudo[211160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:00:00 compute-1 sudo[211160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:00:00 compute-1 sudo[211160]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:00 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:00:00 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:00 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:00:00 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:00.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:00:00 compute-1 ceph-mon[80077]: pgmap v525: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 767 B/s wr, 2 op/s
Dec 07 10:00:00 compute-1 ceph-mon[80077]: overall HEALTH_OK
Dec 07 10:00:00 compute-1 sudo[211311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkaturcvwaottwbvpkkzxmrddjeglwru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101600.108612-1611-57922301088568/AnsiballZ_file.py'
Dec 07 10:00:00 compute-1 sudo[211311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:00:00 compute-1 python3.9[211313]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 10:00:00 compute-1 sudo[211311]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 10:00:00 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e780032e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:00:00 compute-1 sudo[211387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbzknwycguyixvszwvegvgnehclauwvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101600.108612-1611-57922301088568/AnsiballZ_stat.py'
Dec 07 10:00:00 compute-1 sudo[211387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:00:01 compute-1 systemd[1]: virtqemud.service: Deactivated successfully.
Dec 07 10:00:01 compute-1 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec 07 10:00:01 compute-1 python3.9[211389]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 07 10:00:01 compute-1 sudo[211387]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:01 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[201876]: 07/12/2025 10:00:01 : epoch 69354fdf : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9e780032e0 fd 47 proxy ignored for local
Dec 07 10:00:01 compute-1 kernel: ganesha.nfsd[203475]: segfault at 50 ip 00007f9f354d732e sp 00007f9eedffa210 error 4 in libntirpc.so.5.8[7f9f354bc000+2c000] likely on CPU 5 (core 0, socket 5)
Dec 07 10:00:01 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec 07 10:00:01 compute-1 systemd[1]: Started Process Core Dump (PID 211415/UID 0).
Dec 07 10:00:01 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:01 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:00:01 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:01.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:00:01 compute-1 sudo[211542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfkkelingsodvahodqynjahmexkhwfwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101601.3805182-1611-94282939042905/AnsiballZ_copy.py'
Dec 07 10:00:01 compute-1 sudo[211542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:00:02 compute-1 python3.9[211544]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765101601.3805182-1611-94282939042905/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 10:00:02 compute-1 sudo[211542]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:02 compute-1 sudo[211619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imsioumxyvjfhsnkosgclsgpdqfgzvlc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101601.3805182-1611-94282939042905/AnsiballZ_systemd.py'
Dec 07 10:00:02 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:02 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:00:02 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:02.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:00:02 compute-1 sudo[211619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:00:02 compute-1 ceph-mon[80077]: pgmap v526: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 767 B/s wr, 2 op/s
Dec 07 10:00:02 compute-1 systemd-coredump[211416]: Process 201880 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 54:
                                                    #0  0x00007f9f354d732e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Dec 07 10:00:02 compute-1 python3.9[211621]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 07 10:00:02 compute-1 systemd[1]: Reloading.
Dec 07 10:00:02 compute-1 systemd-rc-local-generator[211651]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 10:00:02 compute-1 systemd-sysv-generator[211655]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 10:00:02 compute-1 podman[211625]: 2025-12-07 10:00:02.749773161 +0000 UTC m=+0.046656292 container died 0ae6a0223306e17d3258772ea8eac2cd3c3469724f0f54488336428d40c51fd5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 07 10:00:02 compute-1 podman[211625]: 2025-12-07 10:00:02.79305775 +0000 UTC m=+0.089940871 container remove 0ae6a0223306e17d3258772ea8eac2cd3c3469724f0f54488336428d40c51fd5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 07 10:00:02 compute-1 systemd[1]: var-lib-containers-storage-overlay-a841881b1b7b3723f1b7aee828a2151cdbfe83d070478a96889de21a60bebb99-merged.mount: Deactivated successfully.
Dec 07 10:00:02 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Main process exited, code=exited, status=139/n/a
Dec 07 10:00:02 compute-1 systemd[1]: systemd-coredump@7-211415-0.service: Deactivated successfully.
Dec 07 10:00:02 compute-1 systemd[1]: systemd-coredump@7-211415-0.service: Consumed 1.179s CPU time.
Dec 07 10:00:02 compute-1 sudo[211619]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:03 compute-1 podman[211677]: 2025-12-07 10:00:03.068533646 +0000 UTC m=+0.095249886 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 07 10:00:03 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Failed with result 'exit-code'.
Dec 07 10:00:03 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Consumed 1.504s CPU time.
Dec 07 10:00:03 compute-1 sudo[211800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztvpjqgupveqbxcxpljmokwqdeujhtoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101601.3805182-1611-94282939042905/AnsiballZ_systemd.py'
Dec 07 10:00:03 compute-1 sudo[211800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:00:03 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:03 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:00:03 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:03.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:00:03 compute-1 python3.9[211802]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 07 10:00:03 compute-1 systemd[1]: Reloading.
Dec 07 10:00:03 compute-1 systemd-rc-local-generator[211831]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 10:00:03 compute-1 systemd-sysv-generator[211835]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 10:00:03 compute-1 systemd[1]: Starting multipathd container...
Dec 07 10:00:03 compute-1 systemd[1]: Started libcrun container.
Dec 07 10:00:03 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/830fb8f2518913437c1fc3c3c1578a276f70150a8577c26544cb7c8c6a4b2cf8/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 07 10:00:03 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/830fb8f2518913437c1fc3c3c1578a276f70150a8577c26544cb7c8c6a4b2cf8/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 07 10:00:04 compute-1 systemd[1]: Started /usr/bin/podman healthcheck run 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193.
Dec 07 10:00:04 compute-1 podman[211842]: 2025-12-07 10:00:04.025590189 +0000 UTC m=+0.118653614 container init 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 07 10:00:04 compute-1 multipathd[211858]: + sudo -E kolla_set_configs
Dec 07 10:00:04 compute-1 podman[211842]: 2025-12-07 10:00:04.052556834 +0000 UTC m=+0.145620259 container start 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 07 10:00:04 compute-1 podman[211842]: multipathd
Dec 07 10:00:04 compute-1 sudo[211864]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 07 10:00:04 compute-1 sudo[211864]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 07 10:00:04 compute-1 sudo[211864]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 07 10:00:04 compute-1 systemd[1]: Started multipathd container.
Dec 07 10:00:04 compute-1 sudo[211800]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:04 compute-1 multipathd[211858]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 07 10:00:04 compute-1 multipathd[211858]: INFO:__main__:Validating config file
Dec 07 10:00:04 compute-1 multipathd[211858]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 07 10:00:04 compute-1 multipathd[211858]: INFO:__main__:Writing out command to execute
Dec 07 10:00:04 compute-1 sudo[211864]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:04 compute-1 multipathd[211858]: ++ cat /run_command
Dec 07 10:00:04 compute-1 multipathd[211858]: + CMD='/usr/sbin/multipathd -d'
Dec 07 10:00:04 compute-1 multipathd[211858]: + ARGS=
Dec 07 10:00:04 compute-1 multipathd[211858]: + sudo kolla_copy_cacerts
Dec 07 10:00:04 compute-1 podman[211865]: 2025-12-07 10:00:04.122757096 +0000 UTC m=+0.057241460 container health_status 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 07 10:00:04 compute-1 sudo[211887]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 07 10:00:04 compute-1 sudo[211887]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 07 10:00:04 compute-1 sudo[211887]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 07 10:00:04 compute-1 systemd[1]: 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193-16d13ecc8098412d.service: Main process exited, code=exited, status=1/FAILURE
Dec 07 10:00:04 compute-1 systemd[1]: 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193-16d13ecc8098412d.service: Failed with result 'exit-code'.
Dec 07 10:00:04 compute-1 sudo[211887]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:04 compute-1 multipathd[211858]: Running command: '/usr/sbin/multipathd -d'
Dec 07 10:00:04 compute-1 multipathd[211858]: + [[ ! -n '' ]]
Dec 07 10:00:04 compute-1 multipathd[211858]: + . kolla_extend_start
Dec 07 10:00:04 compute-1 multipathd[211858]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec 07 10:00:04 compute-1 multipathd[211858]: + umask 0022
Dec 07 10:00:04 compute-1 multipathd[211858]: + exec /usr/sbin/multipathd -d
Dec 07 10:00:04 compute-1 multipathd[211858]: 3564.795615 | --------start up--------
Dec 07 10:00:04 compute-1 multipathd[211858]: 3564.795634 | read /etc/multipath.conf
Dec 07 10:00:04 compute-1 multipathd[211858]: 3564.801158 | path checkers start up
Dec 07 10:00:04 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:04 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:00:04 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:04.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:00:04 compute-1 ceph-mon[80077]: pgmap v527: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Dec 07 10:00:05 compute-1 python3.9[212046]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 07 10:00:05 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:00:05 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:05 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:00:05 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:05.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:00:05 compute-1 sudo[212198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgvoizcgaoebcspryiudsfpyczxevftg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101605.476958-1719-130182304009866/AnsiballZ_command.py'
Dec 07 10:00:05 compute-1 sudo[212198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:00:06 compute-1 python3.9[212200]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 10:00:06 compute-1 sudo[212198]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:06 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:06 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:00:06 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:06.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:00:06 compute-1 ceph-mon[80077]: pgmap v528: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec 07 10:00:06 compute-1 sudo[212364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdkaepcouepfvteyofxpkkiejfwfvkdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101606.5947485-1743-145569396069782/AnsiballZ_systemd.py'
Dec 07 10:00:06 compute-1 sudo[212364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:00:07 compute-1 python3.9[212366]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 07 10:00:07 compute-1 systemd[1]: Stopping multipathd container...
Dec 07 10:00:07 compute-1 multipathd[211858]: 3567.920949 | exit (signal)
Dec 07 10:00:07 compute-1 multipathd[211858]: 3567.921435 | --------shut down-------
Dec 07 10:00:07 compute-1 systemd[1]: libpod-0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193.scope: Deactivated successfully.
Dec 07 10:00:07 compute-1 podman[212370]: 2025-12-07 10:00:07.30316012 +0000 UTC m=+0.068630570 container died 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 07 10:00:07 compute-1 systemd[1]: 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193-16d13ecc8098412d.timer: Deactivated successfully.
Dec 07 10:00:07 compute-1 systemd[1]: Stopped /usr/bin/podman healthcheck run 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193.
Dec 07 10:00:07 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193-userdata-shm.mount: Deactivated successfully.
Dec 07 10:00:07 compute-1 systemd[1]: var-lib-containers-storage-overlay-830fb8f2518913437c1fc3c3c1578a276f70150a8577c26544cb7c8c6a4b2cf8-merged.mount: Deactivated successfully.
Dec 07 10:00:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/100007 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 07 10:00:07 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:07 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:00:07 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:07.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:00:07 compute-1 podman[212370]: 2025-12-07 10:00:07.642583888 +0000 UTC m=+0.408054358 container cleanup 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 07 10:00:07 compute-1 podman[212370]: multipathd
Dec 07 10:00:07 compute-1 ceph-mon[80077]: pgmap v529: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 10:00:07 compute-1 podman[212396]: multipathd
Dec 07 10:00:07 compute-1 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Dec 07 10:00:07 compute-1 systemd[1]: Stopped multipathd container.
Dec 07 10:00:07 compute-1 systemd[1]: Starting multipathd container...
Dec 07 10:00:07 compute-1 systemd[1]: Started libcrun container.
Dec 07 10:00:07 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/830fb8f2518913437c1fc3c3c1578a276f70150a8577c26544cb7c8c6a4b2cf8/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 07 10:00:07 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/830fb8f2518913437c1fc3c3c1578a276f70150a8577c26544cb7c8c6a4b2cf8/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 07 10:00:07 compute-1 systemd[1]: Started /usr/bin/podman healthcheck run 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193.
Dec 07 10:00:07 compute-1 podman[212409]: 2025-12-07 10:00:07.849334319 +0000 UTC m=+0.112003272 container init 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 07 10:00:07 compute-1 multipathd[212426]: + sudo -E kolla_set_configs
Dec 07 10:00:07 compute-1 podman[212409]: 2025-12-07 10:00:07.880007535 +0000 UTC m=+0.142676468 container start 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd)
Dec 07 10:00:07 compute-1 podman[212409]: multipathd
Dec 07 10:00:07 compute-1 sudo[212432]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 07 10:00:07 compute-1 sudo[212432]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 07 10:00:07 compute-1 sudo[212432]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 07 10:00:07 compute-1 systemd[1]: Started multipathd container.
Dec 07 10:00:07 compute-1 sudo[212364]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:07 compute-1 multipathd[212426]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 07 10:00:07 compute-1 multipathd[212426]: INFO:__main__:Validating config file
Dec 07 10:00:07 compute-1 multipathd[212426]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 07 10:00:07 compute-1 multipathd[212426]: INFO:__main__:Writing out command to execute
Dec 07 10:00:07 compute-1 sudo[212432]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:07 compute-1 multipathd[212426]: ++ cat /run_command
Dec 07 10:00:07 compute-1 multipathd[212426]: + CMD='/usr/sbin/multipathd -d'
Dec 07 10:00:07 compute-1 multipathd[212426]: + ARGS=
Dec 07 10:00:07 compute-1 multipathd[212426]: + sudo kolla_copy_cacerts
Dec 07 10:00:07 compute-1 sudo[212453]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 07 10:00:07 compute-1 sudo[212453]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 07 10:00:07 compute-1 sudo[212453]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 07 10:00:07 compute-1 sudo[212453]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:07 compute-1 multipathd[212426]: + [[ ! -n '' ]]
Dec 07 10:00:07 compute-1 multipathd[212426]: + . kolla_extend_start
Dec 07 10:00:07 compute-1 multipathd[212426]: Running command: '/usr/sbin/multipathd -d'
Dec 07 10:00:07 compute-1 multipathd[212426]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec 07 10:00:07 compute-1 multipathd[212426]: + umask 0022
Dec 07 10:00:07 compute-1 multipathd[212426]: + exec /usr/sbin/multipathd -d
Dec 07 10:00:07 compute-1 multipathd[212426]: 3568.637601 | --------start up--------
Dec 07 10:00:07 compute-1 multipathd[212426]: 3568.637623 | read /etc/multipath.conf
Dec 07 10:00:07 compute-1 podman[212433]: 2025-12-07 10:00:07.98956941 +0000 UTC m=+0.093868308 container health_status 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 07 10:00:07 compute-1 multipathd[212426]: 3568.645424 | path checkers start up
Dec 07 10:00:07 compute-1 systemd[1]: 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193-1d6d6bcd1f32aed.service: Main process exited, code=exited, status=1/FAILURE
Dec 07 10:00:07 compute-1 systemd[1]: 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193-1d6d6bcd1f32aed.service: Failed with result 'exit-code'.
Dec 07 10:00:08 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:08 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:00:08 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:08.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:00:08 compute-1 sudo[212616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pboaxfjzjeltvrzlrcqfsktakedmjyvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101608.2205145-1767-132084548704932/AnsiballZ_file.py'
Dec 07 10:00:08 compute-1 sudo[212616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:00:08 compute-1 python3.9[212618]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 10:00:08 compute-1 sudo[212616]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:09 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:09 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:00:09 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:09.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:00:09 compute-1 sudo[212768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlemonsywyqaykwduvevssgncxrrperp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101609.3850207-1803-276746599855597/AnsiballZ_file.py'
Dec 07 10:00:09 compute-1 sudo[212768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:00:09 compute-1 python3.9[212770]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 07 10:00:09 compute-1 sudo[212768]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:09 compute-1 ceph-mon[80077]: pgmap v530: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 10:00:10 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:00:10 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:10 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:00:10 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:10.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:00:10 compute-1 sudo[212921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eojecxotllhptyvcygromfiodfkpylza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101610.226551-1827-269611426701701/AnsiballZ_modprobe.py'
Dec 07 10:00:10 compute-1 sudo[212921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:00:10 compute-1 python3.9[212923]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Dec 07 10:00:10 compute-1 kernel: Key type psk registered
Dec 07 10:00:11 compute-1 sudo[212921]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:11 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:11 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:00:11 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:11.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:00:11 compute-1 sudo[213083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxxhqypoldqrmjvbtxcjkqfvcmnpttfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101611.3106735-1851-82249649802419/AnsiballZ_stat.py'
Dec 07 10:00:11 compute-1 sudo[213083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:00:11 compute-1 python3.9[213085]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 10:00:11 compute-1 sudo[213083]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:11 compute-1 ceph-mon[80077]: pgmap v531: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 10:00:12 compute-1 sudo[213207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iokrbodwwuerxdsqvveafbqqwujrroiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101611.3106735-1851-82249649802419/AnsiballZ_copy.py'
Dec 07 10:00:12 compute-1 sudo[213207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:00:12 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:12 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:00:12 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:12.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:00:12 compute-1 python3.9[213209]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765101611.3106735-1851-82249649802419/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 10:00:12 compute-1 sudo[213207]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:12 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:00:13 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Scheduled restart job, restart counter is at 8.
Dec 07 10:00:13 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.jddrlu for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec 07 10:00:13 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Consumed 1.504s CPU time.
Dec 07 10:00:13 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.jddrlu for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c...
Dec 07 10:00:13 compute-1 sudo[213381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phvvdndlkgpefjaswgkdooucoagpajrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101612.8542986-1899-117355707603343/AnsiballZ_lineinfile.py'
Dec 07 10:00:13 compute-1 sudo[213381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:00:13 compute-1 podman[213333]: 2025-12-07 10:00:13.194962403 +0000 UTC m=+0.066689748 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 07 10:00:13 compute-1 podman[213421]: 2025-12-07 10:00:13.330444153 +0000 UTC m=+0.042454687 container create e0324670f6233a1eb869e8cd6b91542d641a2d91483b1107d804be9fb2789e0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 07 10:00:13 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7418ab22d593b3608e60b0f5bb9995e5ef53019d842b92d67f6db69d5ac34580/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec 07 10:00:13 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7418ab22d593b3608e60b0f5bb9995e5ef53019d842b92d67f6db69d5ac34580/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 07 10:00:13 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7418ab22d593b3608e60b0f5bb9995e5ef53019d842b92d67f6db69d5ac34580/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec 07 10:00:13 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7418ab22d593b3608e60b0f5bb9995e5ef53019d842b92d67f6db69d5ac34580/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.jddrlu-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec 07 10:00:13 compute-1 python3.9[213394]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 10:00:13 compute-1 podman[213421]: 2025-12-07 10:00:13.382181702 +0000 UTC m=+0.094192256 container init e0324670f6233a1eb869e8cd6b91542d641a2d91483b1107d804be9fb2789e0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 07 10:00:13 compute-1 podman[213421]: 2025-12-07 10:00:13.387925349 +0000 UTC m=+0.099935883 container start e0324670f6233a1eb869e8cd6b91542d641a2d91483b1107d804be9fb2789e0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 07 10:00:13 compute-1 bash[213421]: e0324670f6233a1eb869e8cd6b91542d641a2d91483b1107d804be9fb2789e0a
Dec 07 10:00:13 compute-1 podman[213421]: 2025-12-07 10:00:13.312989278 +0000 UTC m=+0.024999832 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 07 10:00:13 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[213436]: 07/12/2025 10:00:13 : epoch 6935502d : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec 07 10:00:13 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[213436]: 07/12/2025 10:00:13 : epoch 6935502d : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec 07 10:00:13 compute-1 sudo[213381]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:13 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.jddrlu for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec 07 10:00:13 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[213436]: 07/12/2025 10:00:13 : epoch 6935502d : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec 07 10:00:13 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[213436]: 07/12/2025 10:00:13 : epoch 6935502d : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec 07 10:00:13 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[213436]: 07/12/2025 10:00:13 : epoch 6935502d : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec 07 10:00:13 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[213436]: 07/12/2025 10:00:13 : epoch 6935502d : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec 07 10:00:13 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[213436]: 07/12/2025 10:00:13 : epoch 6935502d : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec 07 10:00:13 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[213436]: 07/12/2025 10:00:13 : epoch 6935502d : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 07 10:00:13 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:13 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:00:13 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:13.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:00:13 compute-1 sudo[213627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlqwxtmjxclbvxyytbkmhwgfftxxvssr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101613.5995054-1924-17849240185089/AnsiballZ_systemd.py'
Dec 07 10:00:13 compute-1 sudo[213627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:00:13 compute-1 ceph-mon[80077]: pgmap v532: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 10:00:14 compute-1 python3.9[213629]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 07 10:00:14 compute-1 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 07 10:00:14 compute-1 systemd[1]: Stopped Load Kernel Modules.
Dec 07 10:00:14 compute-1 systemd[1]: Stopping Load Kernel Modules...
Dec 07 10:00:14 compute-1 systemd[1]: Starting Load Kernel Modules...
Dec 07 10:00:14 compute-1 systemd[1]: Finished Load Kernel Modules.
Dec 07 10:00:14 compute-1 sudo[213627]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:14 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:14 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:00:14 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:14.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:00:15 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:00:15 compute-1 sudo[213784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wiyzginbeogyqcclhbnibirgijqymwpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101614.7010233-1947-209952837018859/AnsiballZ_dnf.py'
Dec 07 10:00:15 compute-1 sudo[213784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:00:15 compute-1 python3.9[213786]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 07 10:00:15 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:15 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:00:15 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:15.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:00:15 compute-1 ceph-mon[80077]: pgmap v533: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 85 B/s wr, 0 op/s
Dec 07 10:00:16 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:16 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:00:16 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:16.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:00:17 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:17 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:00:17 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:17.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:00:17 compute-1 sshd-session[213791]: Invalid user user from 104.248.193.130 port 50022
Dec 07 10:00:17 compute-1 sshd-session[213791]: Connection closed by invalid user user 104.248.193.130 port 50022 [preauth]
Dec 07 10:00:17 compute-1 ceph-mon[80077]: pgmap v534: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 07 10:00:18 compute-1 systemd[1]: Reloading.
Dec 07 10:00:18 compute-1 systemd-sysv-generator[213824]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 10:00:18 compute-1 systemd-rc-local-generator[213818]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 10:00:18 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:18 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:00:18 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:18.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:00:18 compute-1 systemd[1]: Reloading.
Dec 07 10:00:18 compute-1 systemd-rc-local-generator[213858]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 10:00:18 compute-1 systemd-sysv-generator[213862]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 10:00:18 compute-1 systemd-logind[796]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 07 10:00:18 compute-1 systemd-logind[796]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 07 10:00:18 compute-1 lvm[213905]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 07 10:00:18 compute-1 lvm[213905]: VG ceph_vg0 finished
Dec 07 10:00:19 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 07 10:00:19 compute-1 systemd[1]: Starting man-db-cache-update.service...
Dec 07 10:00:19 compute-1 systemd[1]: Reloading.
Dec 07 10:00:19 compute-1 systemd-sysv-generator[213958]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 10:00:19 compute-1 systemd-rc-local-generator[213955]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 10:00:19 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 07 10:00:19 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[213436]: 07/12/2025 10:00:19 : epoch 6935502d : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 07 10:00:19 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[213436]: 07/12/2025 10:00:19 : epoch 6935502d : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 07 10:00:19 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:19 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:00:19 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:19.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:00:19 compute-1 sudo[213784]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:19 compute-1 ceph-mon[80077]: pgmap v535: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 07 10:00:20 compute-1 sudo[214945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:00:20 compute-1 sudo[214945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:00:20 compute-1 sudo[214945]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:20 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:00:20 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:20 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:00:20 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:20.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:00:20 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 07 10:00:20 compute-1 systemd[1]: Finished man-db-cache-update.service.
Dec 07 10:00:20 compute-1 systemd[1]: man-db-cache-update.service: Consumed 1.435s CPU time.
Dec 07 10:00:20 compute-1 systemd[1]: run-r085303c3eff742089a80c38db70318ef.service: Deactivated successfully.
Dec 07 10:00:20 compute-1 sudo[215270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lskqzklawhwbfmaneoxuuxqxwbkzdqof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101620.1486864-1971-80504789371011/AnsiballZ_systemd_service.py'
Dec 07 10:00:20 compute-1 sudo[215270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:00:20 compute-1 python3.9[215272]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 07 10:00:20 compute-1 systemd[1]: Stopping Open-iSCSI...
Dec 07 10:00:20 compute-1 iscsid[203179]: iscsid shutting down.
Dec 07 10:00:20 compute-1 systemd[1]: iscsid.service: Deactivated successfully.
Dec 07 10:00:20 compute-1 systemd[1]: Stopped Open-iSCSI.
Dec 07 10:00:20 compute-1 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec 07 10:00:20 compute-1 systemd[1]: Starting Open-iSCSI...
Dec 07 10:00:20 compute-1 systemd[1]: Started Open-iSCSI.
Dec 07 10:00:20 compute-1 sudo[215270]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:21 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:21 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:00:21 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:21.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:00:21 compute-1 python3.9[215427]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 07 10:00:22 compute-1 ceph-mon[80077]: pgmap v536: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Dec 07 10:00:22 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:22 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:00:22 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:22.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:00:22 compute-1 sudo[215582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oulcogihydjfwyycizgptlmswvnwclxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101622.3118062-2023-1225123828518/AnsiballZ_file.py'
Dec 07 10:00:22 compute-1 sudo[215582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:00:22 compute-1 python3.9[215584]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 10:00:22 compute-1 sudo[215582]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:23 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:23 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:00:23 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:23.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:00:23 compute-1 sudo[215734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezxinbpdrpgeovxodojwjdtfiglrlchy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101623.433082-2056-62555763529025/AnsiballZ_systemd_service.py'
Dec 07 10:00:23 compute-1 sudo[215734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:00:23 compute-1 python3.9[215736]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 07 10:00:23 compute-1 systemd[1]: Reloading.
Dec 07 10:00:24 compute-1 ceph-mon[80077]: pgmap v537: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Dec 07 10:00:24 compute-1 systemd-rc-local-generator[215762]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 10:00:24 compute-1 systemd-sysv-generator[215765]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 10:00:24 compute-1 sudo[215734]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:24 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:24 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:00:24 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:24.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:00:25 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:00:25 compute-1 python3.9[215921]: ansible-ansible.builtin.service_facts Invoked
Dec 07 10:00:25 compute-1 network[215938]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 07 10:00:25 compute-1 network[215939]: 'network-scripts' will be removed from distribution in near future.
Dec 07 10:00:25 compute-1 network[215940]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 07 10:00:25 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:25 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:00:25 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:25.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:00:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[213436]: 07/12/2025 10:00:25 : epoch 6935502d : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 07 10:00:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[213436]: 07/12/2025 10:00:25 : epoch 6935502d : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec 07 10:00:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[213436]: 07/12/2025 10:00:25 : epoch 6935502d : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec 07 10:00:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[213436]: 07/12/2025 10:00:25 : epoch 6935502d : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec 07 10:00:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[213436]: 07/12/2025 10:00:25 : epoch 6935502d : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec 07 10:00:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[213436]: 07/12/2025 10:00:25 : epoch 6935502d : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec 07 10:00:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[213436]: 07/12/2025 10:00:25 : epoch 6935502d : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec 07 10:00:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[213436]: 07/12/2025 10:00:25 : epoch 6935502d : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 10:00:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[213436]: 07/12/2025 10:00:25 : epoch 6935502d : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 10:00:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[213436]: 07/12/2025 10:00:25 : epoch 6935502d : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 10:00:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[213436]: 07/12/2025 10:00:25 : epoch 6935502d : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec 07 10:00:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[213436]: 07/12/2025 10:00:25 : epoch 6935502d : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 10:00:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[213436]: 07/12/2025 10:00:25 : epoch 6935502d : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec 07 10:00:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[213436]: 07/12/2025 10:00:25 : epoch 6935502d : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec 07 10:00:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[213436]: 07/12/2025 10:00:25 : epoch 6935502d : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec 07 10:00:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[213436]: 07/12/2025 10:00:25 : epoch 6935502d : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec 07 10:00:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[213436]: 07/12/2025 10:00:25 : epoch 6935502d : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec 07 10:00:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[213436]: 07/12/2025 10:00:25 : epoch 6935502d : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec 07 10:00:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[213436]: 07/12/2025 10:00:25 : epoch 6935502d : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec 07 10:00:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[213436]: 07/12/2025 10:00:25 : epoch 6935502d : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec 07 10:00:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[213436]: 07/12/2025 10:00:25 : epoch 6935502d : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec 07 10:00:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[213436]: 07/12/2025 10:00:25 : epoch 6935502d : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec 07 10:00:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[213436]: 07/12/2025 10:00:25 : epoch 6935502d : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec 07 10:00:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[213436]: 07/12/2025 10:00:25 : epoch 6935502d : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec 07 10:00:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[213436]: 07/12/2025 10:00:25 : epoch 6935502d : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 07 10:00:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[213436]: 07/12/2025 10:00:25 : epoch 6935502d : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec 07 10:00:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[213436]: 07/12/2025 10:00:25 : epoch 6935502d : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 07 10:00:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[213436]: 07/12/2025 10:00:26 : epoch 6935502d : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5b4c000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:00:26 compute-1 ceph-mon[80077]: pgmap v538: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 07 10:00:26 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:26 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:00:26 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:26.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:00:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[213436]: 07/12/2025 10:00:26 : epoch 6935502d : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5b380016e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:00:27 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[213436]: 07/12/2025 10:00:27 : epoch 6935502d : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5b20000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:00:27 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:27 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:00:27 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:27.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:00:28 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[213436]: 07/12/2025 10:00:28 : epoch 6935502d : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5b1c000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:00:28 compute-1 ceph-mon[80077]: pgmap v539: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Dec 07 10:00:28 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:00:28 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:28 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:00:28 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:28.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:00:28 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[213436]: 07/12/2025 10:00:28 : epoch 6935502d : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5b44001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:00:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/100029 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 07 10:00:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[213436]: 07/12/2025 10:00:29 : epoch 6935502d : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5b38002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:00:29 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:29 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:00:29 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:29.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:00:30 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[213436]: 07/12/2025 10:00:30 : epoch 6935502d : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5b200016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:00:30 compute-1 ceph-mon[80077]: pgmap v540: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Dec 07 10:00:30 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:00:30 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:30 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:00:30 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:30.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:00:30 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[213436]: 07/12/2025 10:00:30 : epoch 6935502d : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5b200016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:00:30 compute-1 sudo[216232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkqprzivvoprmokgludwmitjcwomryxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101630.5192573-2113-138476169254176/AnsiballZ_systemd_service.py'
Dec 07 10:00:30 compute-1 sudo[216232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:00:31 compute-1 python3.9[216234]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 07 10:00:31 compute-1 sudo[216232]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[213436]: 07/12/2025 10:00:31 : epoch 6935502d : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5b440025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:00:31 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:31 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:00:31 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:31.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:00:31 compute-1 sudo[216385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjzrzhtvlawopmdftqzancfgpuqjqhzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101631.3149433-2113-211972518023366/AnsiballZ_systemd_service.py'
Dec 07 10:00:31 compute-1 sudo[216385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:00:31 compute-1 python3.9[216387]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 07 10:00:31 compute-1 sudo[216385]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:32 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[213436]: 07/12/2025 10:00:32 : epoch 6935502d : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5b38002000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:00:32 compute-1 ceph-mon[80077]: pgmap v541: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Dec 07 10:00:32 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:32 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:00:32 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:32.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:00:32 compute-1 sudo[216539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nenvxgoblylvrvgbiwuiyvndgiaeajuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101632.121205-2113-271418763825177/AnsiballZ_systemd_service.py'
Dec 07 10:00:32 compute-1 sudo[216539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:00:32 compute-1 python3.9[216541]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 07 10:00:32 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[213436]: 07/12/2025 10:00:32 : epoch 6935502d : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5b200016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:00:32 compute-1 sudo[216539]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[213436]: 07/12/2025 10:00:33 : epoch 6935502d : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5b200016a0 fd 38 proxy ignored for local
Dec 07 10:00:33 compute-1 kernel: ganesha.nfsd[215960]: segfault at 50 ip 00007f5bf9f1d32e sp 00007f5bb67fb210 error 4 in libntirpc.so.5.8[7f5bf9f02000+2c000] likely on CPU 7 (core 0, socket 7)
Dec 07 10:00:33 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec 07 10:00:33 compute-1 systemd[1]: Started Process Core Dump (PID 216666/UID 0).
Dec 07 10:00:33 compute-1 sudo[216705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-siiunzagewexidlkmocmhaaobzsngvvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101633.0160291-2113-244556781635715/AnsiballZ_systemd_service.py'
Dec 07 10:00:33 compute-1 sudo[216705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:00:33 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:33 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:00:33 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:33.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:00:33 compute-1 podman[216667]: 2025-12-07 10:00:33.567850698 +0000 UTC m=+0.117130122 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 07 10:00:33 compute-1 python3.9[216709]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 07 10:00:33 compute-1 sudo[216705]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:34 compute-1 ceph-mon[80077]: pgmap v542: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec 07 10:00:34 compute-1 sudo[216872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgjhbayuioefppjahxnxgdnsiqwaooyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101633.9647892-2113-152468279651605/AnsiballZ_systemd_service.py'
Dec 07 10:00:34 compute-1 sudo[216872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:00:34 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:34 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:00:34 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:34.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:00:34 compute-1 python3.9[216874]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 07 10:00:34 compute-1 sudo[216872]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:34 compute-1 systemd-coredump[216668]: Process 213440 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 54:
                                                    #0  0x00007f5bf9f1d32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Dec 07 10:00:34 compute-1 systemd[1]: systemd-coredump@8-216666-0.service: Deactivated successfully.
Dec 07 10:00:34 compute-1 systemd[1]: systemd-coredump@8-216666-0.service: Consumed 1.279s CPU time.
Dec 07 10:00:34 compute-1 podman[216932]: 2025-12-07 10:00:34.831161295 +0000 UTC m=+0.034242544 container died e0324670f6233a1eb869e8cd6b91542d641a2d91483b1107d804be9fb2789e0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Dec 07 10:00:34 compute-1 systemd[1]: var-lib-containers-storage-overlay-7418ab22d593b3608e60b0f5bb9995e5ef53019d842b92d67f6db69d5ac34580-merged.mount: Deactivated successfully.
Dec 07 10:00:34 compute-1 podman[216932]: 2025-12-07 10:00:34.869236313 +0000 UTC m=+0.072317542 container remove e0324670f6233a1eb869e8cd6b91542d641a2d91483b1107d804be9fb2789e0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec 07 10:00:34 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Main process exited, code=exited, status=139/n/a
Dec 07 10:00:35 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Failed with result 'exit-code'.
Dec 07 10:00:35 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Consumed 1.418s CPU time.
Dec 07 10:00:35 compute-1 sudo[217070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujowkhrypaghsjngptpzgiiuvmtyjjwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101634.759351-2113-188668163293746/AnsiballZ_systemd_service.py'
Dec 07 10:00:35 compute-1 sudo[217070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:00:35 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:00:35 compute-1 python3.9[217072]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 07 10:00:35 compute-1 sudo[217070]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:35 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:35 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:00:35 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:35.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:00:35 compute-1 sudo[217223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lohwpiyqsnfrljazitcvucwdsghuvnmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101635.6071754-2113-88029673787086/AnsiballZ_systemd_service.py'
Dec 07 10:00:35 compute-1 sudo[217223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:00:36 compute-1 python3.9[217225]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 07 10:00:36 compute-1 sudo[217223]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:36 compute-1 ceph-mon[80077]: pgmap v543: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Dec 07 10:00:36 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:36 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:00:36 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:36.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:00:36 compute-1 sudo[217377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmxuqkpgpxhqnvbbqvlhbittkaabwnim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101636.3768704-2113-138574067197491/AnsiballZ_systemd_service.py'
Dec 07 10:00:36 compute-1 sudo[217377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:00:36 compute-1 python3.9[217379]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 07 10:00:37 compute-1 sudo[217377]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:37 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:37 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:00:37 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:37.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:00:38 compute-1 sudo[217435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 10:00:38 compute-1 sudo[217435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:00:38 compute-1 sudo[217435]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:38 compute-1 ceph-mon[80077]: pgmap v544: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 07 10:00:38 compute-1 sudo[217505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Dec 07 10:00:38 compute-1 sudo[217505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:00:38 compute-1 podman[217482]: 2025-12-07 10:00:38.320434324 +0000 UTC m=+0.081846810 container health_status 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible)
Dec 07 10:00:38 compute-1 sudo[217601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxhmfknakbvtgiqvejdmydmkqablveae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101638.0896885-2290-201131367416619/AnsiballZ_file.py'
Dec 07 10:00:38 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:38 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:00:38 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:38.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:00:38 compute-1 sudo[217601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:00:38 compute-1 python3.9[217603]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 10:00:38 compute-1 sudo[217601]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:00:38.632 142603 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:00:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:00:38.633 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:00:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:00:38.633 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:00:38 compute-1 sudo[217505]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:38 compute-1 sudo[217650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 10:00:38 compute-1 sudo[217650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:00:38 compute-1 sudo[217650]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:38 compute-1 sudo[217703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 07 10:00:38 compute-1 sudo[217703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:00:39 compute-1 sudo[217837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqsqkqrxgcufzexsfpfbtbodjnifaxuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101638.7440352-2290-92798364105594/AnsiballZ_file.py'
Dec 07 10:00:39 compute-1 sudo[217837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:00:39 compute-1 python3.9[217841]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 10:00:39 compute-1 sudo[217837]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:39 compute-1 sudo[217703]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:39 compute-1 ceph-osd[77581]: bluestore.MempoolThread fragmentation_score=0.000032 took=0.000055s
Dec 07 10:00:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/100039 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 07 10:00:39 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:39 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:00:39 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:39.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:00:39 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:00:39 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:00:39 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:00:39 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:00:39 compute-1 ceph-mon[80077]: pgmap v545: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 07 10:00:39 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:00:39 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 07 10:00:39 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:00:39 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:00:39 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 07 10:00:39 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 07 10:00:39 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:00:39 compute-1 sudo[218008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkrtckfqmrrphkgcfosgptgadlzvarac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101639.4258413-2290-204249298978210/AnsiballZ_file.py'
Dec 07 10:00:39 compute-1 sudo[218008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:00:39 compute-1 python3.9[218010]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 10:00:39 compute-1 sudo[218008]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:40 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:00:40 compute-1 sudo[218088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:00:40 compute-1 sudo[218088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:00:40 compute-1 sudo[218088]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:40 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:40 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:00:40 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:40.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:00:40 compute-1 sudo[218186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgoxfovsprhwnxvifievlnpibtynggji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101640.083651-2290-202606303897348/AnsiballZ_file.py'
Dec 07 10:00:40 compute-1 sudo[218186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:00:40 compute-1 python3.9[218188]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 10:00:40 compute-1 sudo[218186]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:41 compute-1 ceph-mon[80077]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #34. Immutable memtables: 0.
Dec 07 10:00:41 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:00:41.132663) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 07 10:00:41 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 34
Dec 07 10:00:41 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101641132704, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 1540, "num_deletes": 254, "total_data_size": 3994509, "memory_usage": 4039080, "flush_reason": "Manual Compaction"}
Dec 07 10:00:41 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #35: started
Dec 07 10:00:41 compute-1 sudo[218338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydpebuvjowfbekbxjavdokqhtpmrmhqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101640.78264-2290-220147725016503/AnsiballZ_file.py'
Dec 07 10:00:41 compute-1 sudo[218338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:00:41 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101641152418, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 35, "file_size": 2600372, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18442, "largest_seqno": 19977, "table_properties": {"data_size": 2593844, "index_size": 3727, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 12972, "raw_average_key_size": 18, "raw_value_size": 2580968, "raw_average_value_size": 3778, "num_data_blocks": 165, "num_entries": 683, "num_filter_entries": 683, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765101506, "oldest_key_time": 1765101506, "file_creation_time": 1765101641, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "19b7cd17-892b-4642-8771-311739802c4a", "db_session_id": "JE48258Z8V09XG5T5TD7", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 07 10:00:41 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 19843 microseconds, and 7390 cpu microseconds.
Dec 07 10:00:41 compute-1 ceph-mon[80077]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 07 10:00:41 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:00:41.152499) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #35: 2600372 bytes OK
Dec 07 10:00:41 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:00:41.152528) [db/memtable_list.cc:519] [default] Level-0 commit table #35 started
Dec 07 10:00:41 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:00:41.155044) [db/memtable_list.cc:722] [default] Level-0 commit table #35: memtable #1 done
Dec 07 10:00:41 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:00:41.155111) EVENT_LOG_v1 {"time_micros": 1765101641155097, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 07 10:00:41 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:00:41.155142) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 07 10:00:41 compute-1 ceph-mon[80077]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3987471, prev total WAL file size 3987471, number of live WAL files 2.
Dec 07 10:00:41 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000031.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 10:00:41 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:00:41.157055) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323531' seq:0, type:0; will stop at (end)
Dec 07 10:00:41 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 07 10:00:41 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [35(2539KB)], [33(11MB)]
Dec 07 10:00:41 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101641157100, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [35], "files_L6": [33], "score": -1, "input_data_size": 14580378, "oldest_snapshot_seqno": -1}
Dec 07 10:00:41 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #36: 5094 keys, 14076844 bytes, temperature: kUnknown
Dec 07 10:00:41 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101641316745, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 36, "file_size": 14076844, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14041009, "index_size": 22029, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12741, "raw_key_size": 129386, "raw_average_key_size": 25, "raw_value_size": 13946882, "raw_average_value_size": 2737, "num_data_blocks": 906, "num_entries": 5094, "num_filter_entries": 5094, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100473, "oldest_key_time": 0, "file_creation_time": 1765101641, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "19b7cd17-892b-4642-8771-311739802c4a", "db_session_id": "JE48258Z8V09XG5T5TD7", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec 07 10:00:41 compute-1 ceph-mon[80077]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 07 10:00:41 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:00:41.317087) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 14076844 bytes
Dec 07 10:00:41 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:00:41.318759) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 91.2 rd, 88.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 11.4 +0.0 blob) out(13.4 +0.0 blob), read-write-amplify(11.0) write-amplify(5.4) OK, records in: 5618, records dropped: 524 output_compression: NoCompression
Dec 07 10:00:41 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:00:41.318775) EVENT_LOG_v1 {"time_micros": 1765101641318767, "job": 18, "event": "compaction_finished", "compaction_time_micros": 159831, "compaction_time_cpu_micros": 44988, "output_level": 6, "num_output_files": 1, "total_output_size": 14076844, "num_input_records": 5618, "num_output_records": 5094, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 07 10:00:41 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 10:00:41 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101641319243, "job": 18, "event": "table_file_deletion", "file_number": 35}
Dec 07 10:00:41 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000033.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 10:00:41 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101641321009, "job": 18, "event": "table_file_deletion", "file_number": 33}
Dec 07 10:00:41 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:00:41.156921) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:00:41 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:00:41.321037) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:00:41 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:00:41.321041) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:00:41 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:00:41.321042) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:00:41 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:00:41.321044) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:00:41 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:00:41.321045) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:00:41 compute-1 python3.9[218340]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 10:00:41 compute-1 sudo[218338]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:41 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:41 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:00:41 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:41.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:00:41 compute-1 sudo[218490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfelgfdzckpsovwrkmylzzhiwhbmrhhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101641.5264876-2290-51851298189628/AnsiballZ_file.py'
Dec 07 10:00:41 compute-1 sudo[218490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:00:42 compute-1 python3.9[218492]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 10:00:42 compute-1 sudo[218490]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:42 compute-1 ceph-mon[80077]: pgmap v546: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 07 10:00:42 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:42 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:00:42 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:42.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:00:42 compute-1 sudo[218643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exqrhveqyqzicvejqkpkquaepcavssaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101642.3461037-2290-257726144560528/AnsiballZ_file.py'
Dec 07 10:00:42 compute-1 sudo[218643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:00:42 compute-1 python3.9[218645]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 10:00:42 compute-1 sudo[218643]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:43 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:00:43 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:43 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:00:43 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:43.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:00:43 compute-1 podman[218746]: 2025-12-07 10:00:43.574462651 +0000 UTC m=+0.076191016 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 07 10:00:43 compute-1 sudo[218814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsgeyykkaphqbtyfbcbgoqwppekxhpci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101643.0926466-2290-161944992938949/AnsiballZ_file.py'
Dec 07 10:00:43 compute-1 sudo[218814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:00:43 compute-1 python3.9[218816]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 10:00:43 compute-1 sudo[218814]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:44 compute-1 ceph-osd[77581]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 07 10:00:44 compute-1 ceph-osd[77581]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 9124 writes, 36K keys, 9124 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 9124 writes, 2040 syncs, 4.47 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 793 writes, 1304 keys, 793 commit groups, 1.0 writes per commit group, ingest: 0.44 MB, 0.00 MB/s
                                           Interval WAL: 793 writes, 362 syncs, 2.19 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b19350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b19350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b19350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b19350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b19350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b19350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b19350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b189b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b189b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b189b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b19350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b19350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 07 10:00:44 compute-1 sudo[218841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 07 10:00:44 compute-1 sudo[218841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:00:44 compute-1 sudo[218841]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:44 compute-1 ceph-mon[80077]: pgmap v547: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 07 10:00:44 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:00:44 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:00:44 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:44 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:00:44 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:44.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:00:44 compute-1 sudo[218992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkrqnbrzduxjcpipjrnmgpuxgwewtrbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101644.2935574-2461-238684713563809/AnsiballZ_file.py'
Dec 07 10:00:44 compute-1 sudo[218992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:00:44 compute-1 python3.9[218994]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 10:00:44 compute-1 sudo[218992]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:45 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:00:45 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Scheduled restart job, restart counter is at 9.
Dec 07 10:00:45 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.jddrlu for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec 07 10:00:45 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Consumed 1.418s CPU time.
Dec 07 10:00:45 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.jddrlu for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c...
Dec 07 10:00:45 compute-1 sudo[219167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxfpnqtddgegjcbucovbnqsrhlbdchfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101645.020206-2461-253353115924410/AnsiballZ_file.py'
Dec 07 10:00:45 compute-1 sudo[219167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:00:45 compute-1 podman[219194]: 2025-12-07 10:00:45.530014767 +0000 UTC m=+0.055512454 container create dbd5b9b1c87bd3c09e600cecad2f8feb85993ca17033a76a304da5ec31dd0904 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 07 10:00:45 compute-1 python3.9[219173]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 10:00:45 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:45 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:00:45 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:45.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:00:45 compute-1 sudo[219167]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:45 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1685df81c2c1c59e98be18fa04fd3e85fdfce7a2c09507d841db08345b0643db/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec 07 10:00:45 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1685df81c2c1c59e98be18fa04fd3e85fdfce7a2c09507d841db08345b0643db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 07 10:00:45 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1685df81c2c1c59e98be18fa04fd3e85fdfce7a2c09507d841db08345b0643db/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec 07 10:00:45 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1685df81c2c1c59e98be18fa04fd3e85fdfce7a2c09507d841db08345b0643db/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.jddrlu-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec 07 10:00:45 compute-1 podman[219194]: 2025-12-07 10:00:45.500684108 +0000 UTC m=+0.026181785 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 07 10:00:45 compute-1 podman[219194]: 2025-12-07 10:00:45.612871025 +0000 UTC m=+0.138368772 container init dbd5b9b1c87bd3c09e600cecad2f8feb85993ca17033a76a304da5ec31dd0904 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec 07 10:00:45 compute-1 podman[219194]: 2025-12-07 10:00:45.619517085 +0000 UTC m=+0.145014772 container start dbd5b9b1c87bd3c09e600cecad2f8feb85993ca17033a76a304da5ec31dd0904 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 07 10:00:45 compute-1 bash[219194]: dbd5b9b1c87bd3c09e600cecad2f8feb85993ca17033a76a304da5ec31dd0904
Dec 07 10:00:45 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[219211]: 07/12/2025 10:00:45 : epoch 6935504d : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec 07 10:00:45 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[219211]: 07/12/2025 10:00:45 : epoch 6935504d : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec 07 10:00:45 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.jddrlu for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec 07 10:00:45 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[219211]: 07/12/2025 10:00:45 : epoch 6935504d : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec 07 10:00:45 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[219211]: 07/12/2025 10:00:45 : epoch 6935504d : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec 07 10:00:45 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[219211]: 07/12/2025 10:00:45 : epoch 6935504d : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec 07 10:00:45 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[219211]: 07/12/2025 10:00:45 : epoch 6935504d : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec 07 10:00:45 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[219211]: 07/12/2025 10:00:45 : epoch 6935504d : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec 07 10:00:45 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[219211]: 07/12/2025 10:00:45 : epoch 6935504d : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 07 10:00:46 compute-1 sudo[219402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmjrohopumhuzxrubkpkijxaplhipski ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101645.740979-2461-208211383912771/AnsiballZ_file.py'
Dec 07 10:00:46 compute-1 sudo[219402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:00:46 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:46 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:00:46 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:46.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:00:46 compute-1 ceph-mon[80077]: pgmap v548: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 07 10:00:46 compute-1 python3.9[219405]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 10:00:46 compute-1 sudo[219402]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:47 compute-1 sudo[219555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpsyfqfqkghmzgtdviqtxustqlitvuzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101647.1433141-2461-191505949129729/AnsiballZ_file.py'
Dec 07 10:00:47 compute-1 sudo[219555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:00:47 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:47 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:00:47 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:47.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:00:47 compute-1 python3.9[219557]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 10:00:47 compute-1 sudo[219555]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:47 compute-1 ceph-mon[80077]: pgmap v549: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 07 10:00:48 compute-1 sudo[219708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhebfctyvzulpxbzanjdckmlptflbjat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101647.797411-2461-594089798683/AnsiballZ_file.py'
Dec 07 10:00:48 compute-1 sudo[219708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:00:48 compute-1 python3.9[219710]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 10:00:48 compute-1 sudo[219708]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:48 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:48 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:00:48 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:48.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:00:48 compute-1 sudo[219860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqcaqbckfxhloopewctvwsxpsktdauin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101648.5104704-2461-123658997013614/AnsiballZ_file.py'
Dec 07 10:00:48 compute-1 sudo[219860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:00:48 compute-1 python3.9[219862]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 10:00:49 compute-1 sudo[219860]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:49 compute-1 sudo[220012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjlszkylheumpxdujaovfgcsozicnsrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101649.1452186-2461-87393636193100/AnsiballZ_file.py'
Dec 07 10:00:49 compute-1 sudo[220012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:00:49 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:49 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:00:49 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:49.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:00:49 compute-1 python3.9[220014]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 10:00:49 compute-1 sudo[220012]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:49 compute-1 ceph-mon[80077]: pgmap v550: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 07 10:00:50 compute-1 sudo[220164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsomfnklqnvbtisacvzoshbilpfivtvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101649.807518-2461-243164731861997/AnsiballZ_file.py'
Dec 07 10:00:50 compute-1 sudo[220164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:00:50 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:00:50 compute-1 python3.9[220167]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 10:00:50 compute-1 sudo[220164]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:50 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:50 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:00:50 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:50.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:00:51 compute-1 sudo[220317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llvvraouibjurylkbutxhtmbomsfppso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101650.7459054-2635-244411822702924/AnsiballZ_command.py'
Dec 07 10:00:51 compute-1 sudo[220317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:00:51 compute-1 python3.9[220319]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 10:00:51 compute-1 sudo[220317]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:51 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:51 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:00:51 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:51.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:00:51 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[219211]: 07/12/2025 10:00:51 : epoch 6935504d : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 07 10:00:51 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[219211]: 07/12/2025 10:00:51 : epoch 6935504d : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 07 10:00:52 compute-1 ceph-mon[80077]: pgmap v551: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Dec 07 10:00:52 compute-1 python3.9[220471]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 07 10:00:52 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:52 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:00:52 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:52.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:00:52 compute-1 sudo[220622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uigbbpqvxtcjsifggbagmvrbsvrflgva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101652.630936-2689-58601928997170/AnsiballZ_systemd_service.py'
Dec 07 10:00:52 compute-1 sudo[220622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:00:53 compute-1 python3.9[220624]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 07 10:00:53 compute-1 systemd[1]: Reloading.
Dec 07 10:00:53 compute-1 systemd-rc-local-generator[220653]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 10:00:53 compute-1 systemd-sysv-generator[220657]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 10:00:53 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:53 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:00:53 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:53.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:00:53 compute-1 sudo[220622]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:53 compute-1 ceph-mon[80077]: pgmap v552: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Dec 07 10:00:54 compute-1 sudo[220811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arlbpredsnipbjcigxgwnutxsxdukehm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101653.9872394-2713-141180874677894/AnsiballZ_command.py'
Dec 07 10:00:54 compute-1 sudo[220811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:00:54 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:54 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:00:54 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:54.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:00:54 compute-1 python3.9[220813]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 10:00:54 compute-1 sudo[220811]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:55 compute-1 sudo[220964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxtuuimyiptbsnxjkcklpttdqlopkmuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101654.7013164-2713-148124777314838/AnsiballZ_command.py'
Dec 07 10:00:55 compute-1 sudo[220964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:00:55 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:00:55 compute-1 python3.9[220966]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 10:00:55 compute-1 sudo[220964]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:55 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:55 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:00:55 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:55.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:00:55 compute-1 sudo[221117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghhkzvawsotjxwkaibmmrrjqrgycmpwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101655.4203842-2713-97517241926263/AnsiballZ_command.py'
Dec 07 10:00:55 compute-1 sudo[221117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:00:55 compute-1 python3.9[221119]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 10:00:55 compute-1 sudo[221117]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:55 compute-1 ceph-mon[80077]: pgmap v553: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 07 10:00:56 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:56 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:00:56 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:56.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:00:56 compute-1 sudo[221271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kywnnksiafqvrfxqwnuycqcwakiksmjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101656.1107478-2713-159239389296017/AnsiballZ_command.py'
Dec 07 10:00:56 compute-1 sudo[221271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:00:56 compute-1 python3.9[221273]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 10:00:56 compute-1 sudo[221271]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:57 compute-1 sudo[221424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llnxpwhoehvrnemupmmkwoiscygikpyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101656.7984302-2713-133303308980005/AnsiballZ_command.py'
Dec 07 10:00:57 compute-1 sudo[221424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:00:57 compute-1 python3.9[221426]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 10:00:57 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:57 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:00:57 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:57.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:00:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[219211]: 07/12/2025 10:00:57 : epoch 6935504d : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 07 10:00:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[219211]: 07/12/2025 10:00:57 : epoch 6935504d : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec 07 10:00:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[219211]: 07/12/2025 10:00:57 : epoch 6935504d : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec 07 10:00:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[219211]: 07/12/2025 10:00:57 : epoch 6935504d : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec 07 10:00:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[219211]: 07/12/2025 10:00:57 : epoch 6935504d : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec 07 10:00:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[219211]: 07/12/2025 10:00:57 : epoch 6935504d : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec 07 10:00:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[219211]: 07/12/2025 10:00:57 : epoch 6935504d : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec 07 10:00:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[219211]: 07/12/2025 10:00:57 : epoch 6935504d : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 10:00:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[219211]: 07/12/2025 10:00:57 : epoch 6935504d : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 10:00:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[219211]: 07/12/2025 10:00:57 : epoch 6935504d : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 10:00:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[219211]: 07/12/2025 10:00:57 : epoch 6935504d : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec 07 10:00:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[219211]: 07/12/2025 10:00:57 : epoch 6935504d : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 10:00:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[219211]: 07/12/2025 10:00:57 : epoch 6935504d : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec 07 10:00:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[219211]: 07/12/2025 10:00:57 : epoch 6935504d : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec 07 10:00:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[219211]: 07/12/2025 10:00:57 : epoch 6935504d : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec 07 10:00:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[219211]: 07/12/2025 10:00:57 : epoch 6935504d : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec 07 10:00:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[219211]: 07/12/2025 10:00:57 : epoch 6935504d : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec 07 10:00:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[219211]: 07/12/2025 10:00:57 : epoch 6935504d : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec 07 10:00:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[219211]: 07/12/2025 10:00:57 : epoch 6935504d : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec 07 10:00:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[219211]: 07/12/2025 10:00:57 : epoch 6935504d : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec 07 10:00:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[219211]: 07/12/2025 10:00:57 : epoch 6935504d : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec 07 10:00:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[219211]: 07/12/2025 10:00:57 : epoch 6935504d : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec 07 10:00:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[219211]: 07/12/2025 10:00:57 : epoch 6935504d : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec 07 10:00:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[219211]: 07/12/2025 10:00:57 : epoch 6935504d : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec 07 10:00:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[219211]: 07/12/2025 10:00:57 : epoch 6935504d : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 07 10:00:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[219211]: 07/12/2025 10:00:57 : epoch 6935504d : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec 07 10:00:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[219211]: 07/12/2025 10:00:57 : epoch 6935504d : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 07 10:00:58 compute-1 ceph-mon[80077]: pgmap v554: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Dec 07 10:00:58 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:00:58 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[219211]: 07/12/2025 10:00:58 : epoch 6935504d : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc654000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:00:58 compute-1 sudo[221424]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:58 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:58 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:00:58 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:00:58.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:00:58 compute-1 sudo[221593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzcfmogwlvweavliwktrykdyelnfzvtv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101658.4615524-2713-73715097588299/AnsiballZ_command.py'
Dec 07 10:00:58 compute-1 sudo[221593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:00:58 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[219211]: 07/12/2025 10:00:58 : epoch 6935504d : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc6480014d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:00:58 compute-1 python3.9[221595]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 10:00:58 compute-1 sudo[221593]: pam_unix(sudo:session): session closed for user root
Dec 07 10:00:59 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[219211]: 07/12/2025 10:00:59 : epoch 6935504d : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc630000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:00:59 compute-1 sudo[221746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmxpaaoqsxcbvueskcfegclsmbyfylbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101659.049421-2713-58619937175619/AnsiballZ_command.py'
Dec 07 10:00:59 compute-1 sudo[221746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:00:59 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:00:59 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:00:59 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:00:59.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:00:59 compute-1 python3.9[221748]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 10:00:59 compute-1 sudo[221746]: pam_unix(sudo:session): session closed for user root
Dec 07 10:01:00 compute-1 ceph-mon[80077]: pgmap v555: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Dec 07 10:01:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[219211]: 07/12/2025 10:01:00 : epoch 6935504d : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc62c000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:01:00 compute-1 sudo[221902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsxnglrryfxmopfvdstfmsnxrkrpszso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101659.8123972-2713-131792595007525/AnsiballZ_command.py'
Dec 07 10:01:00 compute-1 sudo[221902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:01:00 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:01:00 compute-1 sudo[221905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:01:00 compute-1 sudo[221905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:01:00 compute-1 sudo[221905]: pam_unix(sudo:session): session closed for user root
Dec 07 10:01:00 compute-1 python3.9[221904]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 07 10:01:00 compute-1 sudo[221902]: pam_unix(sudo:session): session closed for user root
Dec 07 10:01:00 compute-1 sshd-session[221874]: Invalid user vps from 104.248.193.130 port 55012
Dec 07 10:01:00 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:00 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:01:00 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:00.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:01:00 compute-1 sshd-session[221874]: Connection closed by invalid user vps 104.248.193.130 port 55012 [preauth]
Dec 07 10:01:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[219211]: 07/12/2025 10:01:00 : epoch 6935504d : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc650001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:01:01 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/100101 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 07 10:01:01 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[219211]: 07/12/2025 10:01:01 : epoch 6935504d : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc6480021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:01:01 compute-1 CROND[221956]: (root) CMD (run-parts /etc/cron.hourly)
Dec 07 10:01:01 compute-1 run-parts[221959]: (/etc/cron.hourly) starting 0anacron
Dec 07 10:01:01 compute-1 run-parts[221965]: (/etc/cron.hourly) finished 0anacron
Dec 07 10:01:01 compute-1 CROND[221955]: (root) CMDEND (run-parts /etc/cron.hourly)
Dec 07 10:01:01 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:01 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:01:01 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:01:01.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:01:01 compute-1 sudo[222091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivwpdrhvejzccchhcecjnjsazieortqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101661.6963043-2920-83957407966253/AnsiballZ_file.py'
Dec 07 10:01:01 compute-1 sudo[222091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:01:02 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[219211]: 07/12/2025 10:01:02 : epoch 6935504d : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc6480021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:01:02 compute-1 ceph-mon[80077]: pgmap v556: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 07 10:01:02 compute-1 python3.9[222093]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 07 10:01:02 compute-1 sudo[222091]: pam_unix(sudo:session): session closed for user root
Dec 07 10:01:02 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:02 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:01:02 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:02.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:01:02 compute-1 sudo[222244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejjoaolxgdqzecyqgyzxjwuubtfxktnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101662.3484669-2920-112013549407190/AnsiballZ_file.py'
Dec 07 10:01:02 compute-1 sudo[222244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:01:02 compute-1 python3.9[222246]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 07 10:01:02 compute-1 sudo[222244]: pam_unix(sudo:session): session closed for user root
Dec 07 10:01:02 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[219211]: 07/12/2025 10:01:02 : epoch 6935504d : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc62c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:01:03 compute-1 sudo[222396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uefdattwawqytnhadyjrjikbnoxmcuqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101662.9724953-2920-277807336176623/AnsiballZ_file.py'
Dec 07 10:01:03 compute-1 sudo[222396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:01:03 compute-1 python3.9[222398]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 07 10:01:03 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[219211]: 07/12/2025 10:01:03 : epoch 6935504d : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc6500025c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:01:03 compute-1 sudo[222396]: pam_unix(sudo:session): session closed for user root
Dec 07 10:01:03 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:03 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:01:03 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:01:03.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:01:04 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[219211]: 07/12/2025 10:01:04 : epoch 6935504d : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc6480021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:01:04 compute-1 ceph-mon[80077]: pgmap v557: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Dec 07 10:01:04 compute-1 sudo[222559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpcutbcjnvnhaftqtoiwovcflimeohke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101664.0061426-2986-32853938238642/AnsiballZ_file.py'
Dec 07 10:01:04 compute-1 sudo[222559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:01:04 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:04 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:01:04 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:04.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:01:04 compute-1 podman[222523]: 2025-12-07 10:01:04.468547414 +0000 UTC m=+0.168261325 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 07 10:01:04 compute-1 python3.9[222564]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 07 10:01:04 compute-1 sudo[222559]: pam_unix(sudo:session): session closed for user root
Dec 07 10:01:04 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[219211]: 07/12/2025 10:01:04 : epoch 6935504d : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc6300016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:01:04 compute-1 sudo[222728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yowvxlqdepkuuvxaagorcavpgoxmufsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101664.6572018-2986-262969304600376/AnsiballZ_file.py'
Dec 07 10:01:04 compute-1 sudo[222728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:01:05 compute-1 python3.9[222730]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 07 10:01:05 compute-1 sudo[222728]: pam_unix(sudo:session): session closed for user root
Dec 07 10:01:05 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:01:05 compute-1 kernel: ganesha.nfsd[221442]: segfault at 50 ip 00007fc7013c732e sp 00007fc6b57f9210 error 4 in libntirpc.so.5.8[7fc7013ac000+2c000] likely on CPU 1 (core 0, socket 1)
Dec 07 10:01:05 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec 07 10:01:05 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[219211]: 07/12/2025 10:01:05 : epoch 6935504d : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc6300016a0 fd 39 proxy ignored for local
Dec 07 10:01:05 compute-1 systemd[1]: Started Process Core Dump (PID 222830/UID 0).
Dec 07 10:01:05 compute-1 sudo[222882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcgqzhtnbgzzmsarnnvdovabaiaddrxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101665.2564533-2986-181171798884842/AnsiballZ_file.py'
Dec 07 10:01:05 compute-1 sudo[222882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:01:05 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:05 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:01:05 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:01:05.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:01:05 compute-1 python3.9[222884]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 07 10:01:05 compute-1 sudo[222882]: pam_unix(sudo:session): session closed for user root
Dec 07 10:01:06 compute-1 ceph-mon[80077]: pgmap v558: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Dec 07 10:01:06 compute-1 sudo[223035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwtcpzazywkmcrggfpnewetpcwkoprmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101665.9981103-2986-279998043055838/AnsiballZ_file.py'
Dec 07 10:01:06 compute-1 sudo[223035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:01:06 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:06 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:01:06 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:06.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:01:06 compute-1 python3.9[223037]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 07 10:01:06 compute-1 sudo[223035]: pam_unix(sudo:session): session closed for user root
Dec 07 10:01:06 compute-1 systemd-coredump[222831]: Process 219228 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 54:
                                                    #0  0x00007fc7013c732e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Dec 07 10:01:06 compute-1 systemd[1]: systemd-coredump@9-222830-0.service: Deactivated successfully.
Dec 07 10:01:06 compute-1 systemd[1]: systemd-coredump@9-222830-0.service: Consumed 1.183s CPU time.
Dec 07 10:01:06 compute-1 podman[223094]: 2025-12-07 10:01:06.722835488 +0000 UTC m=+0.028280891 container died dbd5b9b1c87bd3c09e600cecad2f8feb85993ca17033a76a304da5ec31dd0904 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec 07 10:01:06 compute-1 systemd[1]: var-lib-containers-storage-overlay-1685df81c2c1c59e98be18fa04fd3e85fdfce7a2c09507d841db08345b0643db-merged.mount: Deactivated successfully.
Dec 07 10:01:06 compute-1 podman[223094]: 2025-12-07 10:01:06.766325973 +0000 UTC m=+0.071771366 container remove dbd5b9b1c87bd3c09e600cecad2f8feb85993ca17033a76a304da5ec31dd0904 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 07 10:01:06 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Main process exited, code=exited, status=139/n/a
Dec 07 10:01:06 compute-1 sudo[223234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xndrlobyemiybojbpqrgdfralhzfctip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101666.6616862-2986-275291542690228/AnsiballZ_file.py'
Dec 07 10:01:06 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Failed with result 'exit-code'.
Dec 07 10:01:06 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Consumed 1.617s CPU time.
Dec 07 10:01:06 compute-1 sudo[223234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:01:07 compute-1 python3.9[223237]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 07 10:01:07 compute-1 sudo[223234]: pam_unix(sudo:session): session closed for user root
Dec 07 10:01:07 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:07 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:01:07 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:01:07.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:01:07 compute-1 sudo[223388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srjhfnbvflzalubezmtzphxehtgboipn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101667.3740396-2986-102903138027954/AnsiballZ_file.py'
Dec 07 10:01:07 compute-1 sudo[223388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:01:07 compute-1 python3.9[223390]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 07 10:01:07 compute-1 sudo[223388]: pam_unix(sudo:session): session closed for user root
Dec 07 10:01:08 compute-1 ceph-mon[80077]: pgmap v559: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Dec 07 10:01:08 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:08 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:01:08 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:08.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:01:08 compute-1 sudo[223550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnhubybtoazgqfduuhnevofizvmygwft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101668.1109388-2986-110826112875595/AnsiballZ_file.py'
Dec 07 10:01:08 compute-1 sudo[223550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:01:08 compute-1 podman[223515]: 2025-12-07 10:01:08.492485269 +0000 UTC m=+0.070902322 container health_status 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 07 10:01:08 compute-1 python3.9[223562]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 07 10:01:08 compute-1 sudo[223550]: pam_unix(sudo:session): session closed for user root
Dec 07 10:01:09 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:09 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:01:09 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:01:09.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:01:10 compute-1 ceph-mon[80077]: pgmap v560: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Dec 07 10:01:10 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:01:10 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:10 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:01:10 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:10.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:01:11 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/100111 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 07 10:01:11 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:11 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:01:11 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:01:11.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:01:12 compute-1 ceph-mon[80077]: pgmap v561: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec 07 10:01:12 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:12 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:01:12 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:12.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:01:13 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:01:13 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:13 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:01:13 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:01:13.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:01:13 compute-1 ceph-mon[80077]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 07 10:01:13 compute-1 ceph-mon[80077]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 3716 writes, 20K keys, 3716 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.04 MB/s
                                           Cumulative WAL: 3716 writes, 3716 syncs, 1.00 writes per sync, written: 0.05 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1448 writes, 6579 keys, 1448 commit groups, 1.0 writes per commit group, ingest: 16.29 MB, 0.03 MB/s
                                           Interval WAL: 1448 writes, 1448 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     77.4      0.37              0.10         9    0.042       0      0       0.0       0.0
                                             L6      1/0   13.42 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.5     87.5     75.8      1.33              0.32         8    0.166     38K   4126       0.0       0.0
                                            Sum      1/0   13.42 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.5     68.3     76.2      1.70              0.42        17    0.100     38K   4126       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   6.4     59.7     60.4      0.75              0.14         6    0.125     17K   1843       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     87.5     75.8      1.33              0.32         8    0.166     38K   4126       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     77.7      0.37              0.10         8    0.047       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.028, interval 0.007
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.13 GB write, 0.11 MB/s write, 0.11 GB read, 0.10 MB/s read, 1.7 seconds
                                           Interval compaction: 0.04 GB write, 0.08 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.8 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5563169dd350#2 capacity: 304.00 MB usage: 5.88 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 0.000133 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(338,5.55 MB,1.82725%) FilterBlock(17,116.11 KB,0.0372987%) IndexBlock(17,214.02 KB,0.0687499%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 07 10:01:14 compute-1 ceph-mon[80077]: pgmap v562: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 07 10:01:14 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:14 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:01:14 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:14.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:01:14 compute-1 sudo[223734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idiccrplvydxmlkrhktycnytfhxwswqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101673.9685278-3311-43885760337787/AnsiballZ_getent.py'
Dec 07 10:01:14 compute-1 sudo[223734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:01:14 compute-1 podman[223690]: 2025-12-07 10:01:14.487377301 +0000 UTC m=+0.072197998 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 07 10:01:14 compute-1 python3.9[223738]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Dec 07 10:01:14 compute-1 sudo[223734]: pam_unix(sudo:session): session closed for user root
Dec 07 10:01:15 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:01:15 compute-1 sudo[223889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmyrzeczvoitbvuvylvalkixzvobrsgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101674.9745588-3335-30992507493080/AnsiballZ_group.py'
Dec 07 10:01:15 compute-1 sudo[223889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:01:15 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:15 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:01:15 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:01:15.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:01:15 compute-1 python3.9[223891]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 07 10:01:15 compute-1 groupadd[223892]: group added to /etc/group: name=nova, GID=42436
Dec 07 10:01:15 compute-1 groupadd[223892]: group added to /etc/gshadow: name=nova
Dec 07 10:01:15 compute-1 groupadd[223892]: new group: name=nova, GID=42436
Dec 07 10:01:15 compute-1 sudo[223889]: pam_unix(sudo:session): session closed for user root
Dec 07 10:01:16 compute-1 ceph-mon[80077]: pgmap v563: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 07 10:01:16 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:16 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:01:16 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:16.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:01:16 compute-1 sudo[224048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pansdspemacdvxaqpwlkhnrwgpxfnxxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101676.0854847-3359-217519359967424/AnsiballZ_user.py'
Dec 07 10:01:16 compute-1 sudo[224048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:01:16 compute-1 python3.9[224050]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 07 10:01:16 compute-1 useradd[224052]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Dec 07 10:01:16 compute-1 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 07 10:01:16 compute-1 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 07 10:01:16 compute-1 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 07 10:01:16 compute-1 useradd[224052]: add 'nova' to group 'libvirt'
Dec 07 10:01:16 compute-1 useradd[224052]: add 'nova' to shadow group 'libvirt'
Dec 07 10:01:16 compute-1 sudo[224048]: pam_unix(sudo:session): session closed for user root
Dec 07 10:01:16 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Scheduled restart job, restart counter is at 10.
Dec 07 10:01:16 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.jddrlu for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec 07 10:01:16 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Consumed 1.617s CPU time.
Dec 07 10:01:16 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.jddrlu for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c...
Dec 07 10:01:17 compute-1 podman[224132]: 2025-12-07 10:01:17.286394685 +0000 UTC m=+0.058789943 container create 4ca9ca4a7dc30b1e7d3b4bbd1d4f91f2eb46a55a076a7e2e910639b56372058d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec 07 10:01:17 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/938e354b0b02bc182524b41e35a3578a0c9f6143af19ee7b21802fbec34d8f5c/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec 07 10:01:17 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/938e354b0b02bc182524b41e35a3578a0c9f6143af19ee7b21802fbec34d8f5c/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec 07 10:01:17 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/938e354b0b02bc182524b41e35a3578a0c9f6143af19ee7b21802fbec34d8f5c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 07 10:01:17 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/938e354b0b02bc182524b41e35a3578a0c9f6143af19ee7b21802fbec34d8f5c/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.jddrlu-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec 07 10:01:17 compute-1 podman[224132]: 2025-12-07 10:01:17.353606226 +0000 UTC m=+0.126001494 container init 4ca9ca4a7dc30b1e7d3b4bbd1d4f91f2eb46a55a076a7e2e910639b56372058d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 07 10:01:17 compute-1 podman[224132]: 2025-12-07 10:01:17.262186955 +0000 UTC m=+0.034582213 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 07 10:01:17 compute-1 podman[224132]: 2025-12-07 10:01:17.361515501 +0000 UTC m=+0.133910729 container start 4ca9ca4a7dc30b1e7d3b4bbd1d4f91f2eb46a55a076a7e2e910639b56372058d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 07 10:01:17 compute-1 bash[224132]: 4ca9ca4a7dc30b1e7d3b4bbd1d4f91f2eb46a55a076a7e2e910639b56372058d
Dec 07 10:01:17 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:17 : epoch 6935506d : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec 07 10:01:17 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:17 : epoch 6935506d : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec 07 10:01:17 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.jddrlu for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec 07 10:01:17 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:17 : epoch 6935506d : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec 07 10:01:17 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:17 : epoch 6935506d : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec 07 10:01:17 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:17 : epoch 6935506d : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec 07 10:01:17 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:17 : epoch 6935506d : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec 07 10:01:17 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:17 : epoch 6935506d : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec 07 10:01:17 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:17 : epoch 6935506d : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 07 10:01:17 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:17 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:01:17 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:01:17.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:01:17 compute-1 sshd-session[224189]: Accepted publickey for zuul from 192.168.122.30 port 45820 ssh2: ECDSA SHA256:Ge4vEI3tJyOAEV3kv3WUGTzBV/uoL+dgWZHkPhbKL64
Dec 07 10:01:17 compute-1 systemd-logind[796]: New session 54 of user zuul.
Dec 07 10:01:18 compute-1 systemd[1]: Started Session 54 of User zuul.
Dec 07 10:01:18 compute-1 sshd-session[224189]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 07 10:01:18 compute-1 sshd-session[224192]: Received disconnect from 192.168.122.30 port 45820:11: disconnected by user
Dec 07 10:01:18 compute-1 sshd-session[224192]: Disconnected from user zuul 192.168.122.30 port 45820
Dec 07 10:01:18 compute-1 sshd-session[224189]: pam_unix(sshd:session): session closed for user zuul
Dec 07 10:01:18 compute-1 systemd[1]: session-54.scope: Deactivated successfully.
Dec 07 10:01:18 compute-1 systemd-logind[796]: Session 54 logged out. Waiting for processes to exit.
Dec 07 10:01:18 compute-1 systemd-logind[796]: Removed session 54.
Dec 07 10:01:18 compute-1 ceph-mon[80077]: pgmap v564: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 07 10:01:18 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:18 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:01:18 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:18.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:01:18 compute-1 python3.9[224343]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 10:01:19 compute-1 python3.9[224464]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765101678.3996465-3435-199820322540233/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 07 10:01:19 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:19 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:01:19 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:01:19.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:01:20 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:01:20 compute-1 ceph-mon[80077]: pgmap v565: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 07 10:01:20 compute-1 python3.9[224614]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 10:01:20 compute-1 sudo[224618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:01:20 compute-1 sudo[224618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:01:20 compute-1 sudo[224618]: pam_unix(sudo:session): session closed for user root
Dec 07 10:01:20 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:20 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:01:20 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:20.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:01:20 compute-1 python3.9[224716]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 07 10:01:21 compute-1 python3.9[224866]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 10:01:21 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:21 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:01:21 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:01:21.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:01:21 compute-1 python3.9[224987]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765101680.9134316-3435-10105929410913/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 07 10:01:22 compute-1 ceph-mon[80077]: pgmap v566: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 85 B/s wr, 0 op/s
Dec 07 10:01:22 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:22 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:01:22 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:22.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:01:22 compute-1 python3.9[225138]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 10:01:23 compute-1 python3.9[225259]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765101682.164899-3435-122143458260701/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=bc7f3bb7d4094c596a18178a888511b54e157ba4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 07 10:01:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:23 : epoch 6935506d : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 07 10:01:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:23 : epoch 6935506d : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 07 10:01:23 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:23 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:01:23 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:01:23.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:01:23 compute-1 python3.9[225409]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 10:01:24 compute-1 ceph-mon[80077]: pgmap v567: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Dec 07 10:01:24 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:24 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:01:24 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:24.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:01:24 compute-1 python3.9[225531]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765101683.3995812-3435-216272316226732/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 07 10:01:25 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:01:25 compute-1 python3.9[225681]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 10:01:25 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:25 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:01:25 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:01:25.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:01:25 compute-1 python3.9[225802]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765101684.7074404-3435-30708707768406/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 07 10:01:26 compute-1 ceph-mon[80077]: pgmap v568: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Dec 07 10:01:26 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:26 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:01:26 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:26.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:01:27 compute-1 sudo[225953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuakeogdetaozzvwpnrkgrglcdmtgzln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101686.8118474-3683-4661531668040/AnsiballZ_file.py'
Dec 07 10:01:27 compute-1 sudo[225953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:01:27 compute-1 python3.9[225955]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 10:01:27 compute-1 sudo[225953]: pam_unix(sudo:session): session closed for user root
Dec 07 10:01:27 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:27 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:01:27 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:01:27.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:01:27 compute-1 sudo[226105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpaxounjuqywtcnclibnjlekoxewrcrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101687.5337596-3707-51619529436218/AnsiballZ_copy.py'
Dec 07 10:01:27 compute-1 sudo[226105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:01:28 compute-1 python3.9[226107]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 10:01:28 compute-1 sudo[226105]: pam_unix(sudo:session): session closed for user root
Dec 07 10:01:28 compute-1 ceph-mon[80077]: pgmap v569: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Dec 07 10:01:28 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:01:28 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:28 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:01:28 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:28.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:01:28 compute-1 sudo[226258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-logyyhzqdfscvfbsxrhemyyekuazwtwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101688.412935-3731-96396294522096/AnsiballZ_stat.py'
Dec 07 10:01:28 compute-1 sudo[226258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:01:28 compute-1 python3.9[226260]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 07 10:01:28 compute-1 sudo[226258]: pam_unix(sudo:session): session closed for user root
Dec 07 10:01:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:29 : epoch 6935506d : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 07 10:01:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:29 : epoch 6935506d : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec 07 10:01:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:29 : epoch 6935506d : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec 07 10:01:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:29 : epoch 6935506d : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec 07 10:01:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:29 : epoch 6935506d : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec 07 10:01:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:29 : epoch 6935506d : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec 07 10:01:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:29 : epoch 6935506d : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec 07 10:01:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:29 : epoch 6935506d : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 10:01:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:29 : epoch 6935506d : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 10:01:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:29 : epoch 6935506d : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 10:01:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:29 : epoch 6935506d : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec 07 10:01:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:29 : epoch 6935506d : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 10:01:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:29 : epoch 6935506d : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec 07 10:01:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:29 : epoch 6935506d : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec 07 10:01:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:29 : epoch 6935506d : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec 07 10:01:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:29 : epoch 6935506d : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec 07 10:01:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:29 : epoch 6935506d : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec 07 10:01:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:29 : epoch 6935506d : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec 07 10:01:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:29 : epoch 6935506d : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec 07 10:01:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:29 : epoch 6935506d : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec 07 10:01:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:29 : epoch 6935506d : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec 07 10:01:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:29 : epoch 6935506d : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec 07 10:01:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:29 : epoch 6935506d : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec 07 10:01:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:29 : epoch 6935506d : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec 07 10:01:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:29 : epoch 6935506d : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 07 10:01:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:29 : epoch 6935506d : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec 07 10:01:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:29 : epoch 6935506d : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 07 10:01:29 compute-1 sudo[226422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhhqnedmpixtuyfgsrosbmwccripiwgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101689.2335055-3755-56550703151494/AnsiballZ_stat.py'
Dec 07 10:01:29 compute-1 sudo[226422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:01:29 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:29 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:01:29 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:01:29.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:01:29 compute-1 python3.9[226424]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 10:01:29 compute-1 sudo[226422]: pam_unix(sudo:session): session closed for user root
Dec 07 10:01:30 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:30 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9634000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:01:30 compute-1 sudo[226548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnzjvzbjjdemceaihtpuxlszwzwxygtt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101689.2335055-3755-56550703151494/AnsiballZ_copy.py'
Dec 07 10:01:30 compute-1 sudo[226548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:01:30 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:01:30 compute-1 ceph-mon[80077]: pgmap v570: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Dec 07 10:01:30 compute-1 python3.9[226550]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1765101689.2335055-3755-56550703151494/.source _original_basename=.to49m_fu follow=False checksum=ceb0527be95508205146a6bc364d225697fd742c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Dec 07 10:01:30 compute-1 sudo[226548]: pam_unix(sudo:session): session closed for user root
Dec 07 10:01:30 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:30 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:01:30 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:30.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:01:30 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:30 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f96280014d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:01:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:31 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9610000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:01:31 compute-1 python3.9[226702]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 07 10:01:31 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:31 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:01:31 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:01:31.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:01:32 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:32 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f962c001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:01:32 compute-1 python3.9[226854]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 10:01:32 compute-1 ceph-mon[80077]: pgmap v571: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 07 10:01:32 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:32 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:01:32 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:32.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:01:32 compute-1 python3.9[226976]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765101691.7923138-3833-79643182959795/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=81f1f28d070b2613355f782b83a5777fdba9540e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 07 10:01:32 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:32 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9614000d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:01:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/100133 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 07 10:01:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:33 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f96280021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:01:33 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:33 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:01:33 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:01:33.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:01:33 compute-1 python3.9[227126]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 07 10:01:34 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:34 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f96100016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:01:34 compute-1 ceph-mon[80077]: pgmap v572: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Dec 07 10:01:34 compute-1 python3.9[227248]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765101693.3814082-3879-227094494232492/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=2efe6ae78bce1c26d2c384be079fa366810076ad backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 07 10:01:34 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:34 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:01:34 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:34.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:01:34 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:34 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f962c0025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:01:35 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:01:35 compute-1 sudo[227408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stuchujodttfncjdkjdodowlqhciyinh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101695.1193202-3929-276091933576071/AnsiballZ_container_config_data.py'
Dec 07 10:01:35 compute-1 sudo[227408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:01:35 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:35 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9614001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:01:35 compute-1 podman[227372]: 2025-12-07 10:01:35.462043238 +0000 UTC m=+0.087080624 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 07 10:01:35 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:35 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:01:35 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:01:35.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:01:35 compute-1 python3.9[227416]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Dec 07 10:01:35 compute-1 sudo[227408]: pam_unix(sudo:session): session closed for user root
Dec 07 10:01:36 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:36 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f96280021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:01:36 compute-1 ceph-mon[80077]: pgmap v573: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 07 10:01:36 compute-1 sudo[227574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udwtvjnpbgfcixgwslgtjuzhylrrbdbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101696.076611-3956-50622266531692/AnsiballZ_container_config_hash.py'
Dec 07 10:01:36 compute-1 sudo[227574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:01:36 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:36 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:01:36 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:36.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:01:36 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/100136 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 07 10:01:36 compute-1 python3.9[227576]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 07 10:01:36 compute-1 sudo[227574]: pam_unix(sudo:session): session closed for user root
Dec 07 10:01:36 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:36 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f96100016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:01:37 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:37 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f962c0025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:01:37 compute-1 sudo[227726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udacxzbhpakwtlkiczpmrdmhnznyrebc ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765101697.154392-3986-27777447541416/AnsiballZ_edpm_container_manage.py'
Dec 07 10:01:37 compute-1 sudo[227726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:01:37 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:37 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:01:37 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:01:37.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:01:37 compute-1 python3[227728]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Dec 07 10:01:38 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:38 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9614001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:01:38 compute-1 ceph-mon[80077]: pgmap v574: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 07 10:01:38 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:38 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:01:38 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:38.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:01:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:01:38.633 142603 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:01:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:01:38.633 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:01:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:01:38.634 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:01:38 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:38 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9614001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:01:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:39 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f96280021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:01:39 compute-1 podman[227768]: 2025-12-07 10:01:39.589518683 +0000 UTC m=+0.086370433 container health_status 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 07 10:01:39 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:39 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:01:39 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:01:39.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:01:40 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:40 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9608000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:01:40 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:01:40 compute-1 sudo[227804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:01:40 compute-1 sudo[227804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:01:40 compute-1 sudo[227804]: pam_unix(sudo:session): session closed for user root
Dec 07 10:01:40 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:40 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:01:40 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:40.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:01:40 compute-1 ceph-mon[80077]: pgmap v575: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 07 10:01:40 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:40 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9610002720 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:01:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:41 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9614001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:01:41 compute-1 ceph-mon[80077]: pgmap v576: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Dec 07 10:01:41 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:41 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:01:41 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:01:41.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:01:42 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:42 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f96280021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:01:42 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:42 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:01:42 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:42.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:01:42 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:42 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f96080016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:01:43 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:01:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:43 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9610002720 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:01:43 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:43 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:01:43 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:01:43.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:01:44 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:44 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f96140030a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:01:44 compute-1 sudo[227847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 10:01:44 compute-1 sudo[227847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:01:44 compute-1 sudo[227847]: pam_unix(sudo:session): session closed for user root
Dec 07 10:01:44 compute-1 sudo[227872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 07 10:01:44 compute-1 sudo[227872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:01:44 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:44 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:01:44 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:44.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:01:44 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:44 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f96280021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:01:44 compute-1 sshd-session[227897]: Invalid user testuser from 104.248.193.130 port 44358
Dec 07 10:01:45 compute-1 sshd-session[227897]: Connection closed by invalid user testuser 104.248.193.130 port 44358 [preauth]
Dec 07 10:01:45 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:45 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f96080016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:01:45 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:45 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:01:45 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:01:45.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:01:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:46 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9610002720 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:01:46 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:01:46 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:46 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:01:46 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:46.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:01:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:46 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f96140030a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:01:47 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:47 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f96280021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:01:47 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:47 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:01:47 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:01:47.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:01:48 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:48 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f96080016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:01:48 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:48 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:01:48 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:48.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:01:48 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:48 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f96080016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:01:48 compute-1 ceph-mon[80077]: pgmap v577: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Dec 07 10:01:49 compute-1 podman[227912]: 2025-12-07 10:01:49.042975076 +0000 UTC m=+4.087661661 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 07 10:01:49 compute-1 podman[227743]: 2025-12-07 10:01:49.064865423 +0000 UTC m=+11.198115084 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5
Dec 07 10:01:49 compute-1 podman[227980]: 2025-12-07 10:01:49.203560021 +0000 UTC m=+0.048032600 container create 355252ff44d2290864a72f74aecfe53741fa03db42f41b32fff0149217c5d70c (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute_init, container_name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, tcib_managed=true, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 07 10:01:49 compute-1 podman[227980]: 2025-12-07 10:01:49.177972454 +0000 UTC m=+0.022445063 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5
Dec 07 10:01:49 compute-1 python3[227728]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5 bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Dec 07 10:01:49 compute-1 sudo[227872]: pam_unix(sudo:session): session closed for user root
Dec 07 10:01:49 compute-1 sudo[227726]: pam_unix(sudo:session): session closed for user root
Dec 07 10:01:49 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:49 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9614003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:01:49 compute-1 ceph-mon[80077]: pgmap v578: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 07 10:01:49 compute-1 ceph-mon[80077]: pgmap v579: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 07 10:01:49 compute-1 ceph-mon[80077]: pgmap v580: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 07 10:01:49 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:01:49 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:01:49 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:49 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:01:49 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:01:49.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:01:49 compute-1 sudo[228180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nedwlcmipdsgkottdhbfepqcmuiopuhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101709.512522-4010-151322925808193/AnsiballZ_stat.py'
Dec 07 10:01:49 compute-1 sudo[228180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:01:49 compute-1 python3.9[228182]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 07 10:01:50 compute-1 sudo[228180]: pam_unix(sudo:session): session closed for user root
Dec 07 10:01:50 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:50 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f96280021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:01:50 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:50 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:01:50 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:50.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:01:50 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:01:50 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 07 10:01:50 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:01:50 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:01:50 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 07 10:01:50 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 07 10:01:50 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:01:50 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:50 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f96080016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:01:51 compute-1 sudo[228335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbiigolspbbwxisxebdnjbdcpxxqpwpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101710.8217654-4046-62580240610505/AnsiballZ_container_config_data.py'
Dec 07 10:01:51 compute-1 sudo[228335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:01:51 compute-1 python3.9[228337]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Dec 07 10:01:51 compute-1 sudo[228335]: pam_unix(sudo:session): session closed for user root
Dec 07 10:01:51 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:01:51 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:51 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f96080016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:01:51 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:51 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:01:51 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:01:51.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:01:51 compute-1 ceph-mon[80077]: pgmap v581: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Dec 07 10:01:52 compute-1 sudo[228487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpzirocoozsubbpfhkhthmepbruhgalf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101711.7829475-4073-267224186932674/AnsiballZ_container_config_hash.py'
Dec 07 10:01:52 compute-1 sudo[228487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:01:52 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:52 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9614003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:01:52 compute-1 python3.9[228489]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
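[Editor's note] The ansible-container_config_hash call above works against the config volumes under /var/lib/config-data; its actual algorithm is not shown in this log. As a rough sketch of the general idea (a stable digest over the config tree that can be compared to decide whether a container needs restarting), assuming nothing about the module's real implementation:

    # Illustrative only: a stable digest over the files under a config prefix.
    import hashlib
    from pathlib import Path

    def config_tree_hash(prefix="/var/lib/config-data"):
        digest = hashlib.sha256()
        for path in sorted(Path(prefix).rglob("*")):
            if path.is_file():
                digest.update(str(path).encode())   # include the path itself
                digest.update(path.read_bytes())    # and the file contents
        return digest.hexdigest()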
Dec 07 10:01:52 compute-1 sudo[228487]: pam_unix(sudo:session): session closed for user root
Dec 07 10:01:52 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:52 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:01:52 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:52.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:01:52 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:52 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f96280021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:01:53 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:53 : epoch 6935506d : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 07 10:01:53 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:53 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f96080016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:01:53 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:53 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:01:53 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:01:53.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:01:53 compute-1 ceph-mon[80077]: pgmap v582: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 07 10:01:53 compute-1 sudo[228640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvbswytcvdqrenjimalafoyfzwiradsz ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765101713.7002685-4103-138683414259951/AnsiballZ_edpm_container_manage.py'
Dec 07 10:01:53 compute-1 sudo[228640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:01:54 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:54 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f96080016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:01:54 compute-1 python3[228642]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Dec 07 10:01:54 compute-1 podman[228681]: 2025-12-07 10:01:54.408000167 +0000 UTC m=+0.022165035 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5
Dec 07 10:01:54 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:54 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:01:54 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:54.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:01:54 compute-1 podman[228681]: 2025-12-07 10:01:54.596184704 +0000 UTC m=+0.210349602 container create cdd17e06b5d5370f42d2aebef8d8030ee5a261133461f88e1569c7a0392804a1 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 07 10:01:54 compute-1 python3[228642]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5 kolla_start
Dec 07 10:01:54 compute-1 sudo[228640]: pam_unix(sudo:session): session closed for user root
Dec 07 10:01:54 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:54 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9614003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:01:55 compute-1 sudo[228869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jznkmdumiuwdoqtlixcakuwyltybvnui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101715.1047378-4127-198788502500278/AnsiballZ_stat.py'
Dec 07 10:01:55 compute-1 sudo[228869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:01:55 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:55 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9614003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:01:55 compute-1 python3.9[228871]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 07 10:01:55 compute-1 sudo[228869]: pam_unix(sudo:session): session closed for user root
Dec 07 10:01:55 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:55 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:01:55 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:01:55.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:01:55 compute-1 sudo[228873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 07 10:01:55 compute-1 sudo[228873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:01:55 compute-1 sudo[228873]: pam_unix(sudo:session): session closed for user root
Dec 07 10:01:55 compute-1 ceph-mon[80077]: pgmap v583: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Dec 07 10:01:55 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:01:55 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:01:56 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:56 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f96080016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:01:56 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:56 : epoch 6935506d : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 07 10:01:56 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:56 : epoch 6935506d : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 07 10:01:56 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:56 : epoch 6935506d : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 07 10:01:56 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:01:56 compute-1 sudo[229049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpabyntboxrsuujcxupjmhmiwipstfju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101716.0440652-4154-182872080628594/AnsiballZ_file.py'
Dec 07 10:01:56 compute-1 sudo[229049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:01:56 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:56 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:01:56 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:56.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:01:56 compute-1 python3.9[229051]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 10:01:56 compute-1 sudo[229049]: pam_unix(sudo:session): session closed for user root
Dec 07 10:01:56 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:56 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f96080016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:01:57 compute-1 sudo[229200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mogxgdzumoeziosbswrpdtlwhtstquee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101716.7904568-4154-265774932387811/AnsiballZ_copy.py'
Dec 07 10:01:57 compute-1 sudo[229200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:01:57 compute-1 python3.9[229202]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765101716.7904568-4154-265774932387811/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 07 10:01:57 compute-1 sudo[229200]: pam_unix(sudo:session): session closed for user root
Dec 07 10:01:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:57 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f96280021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:01:57 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:57 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:01:57 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:01:57.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:01:57 compute-1 sudo[229276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onxehgtrsiomenmtagnnzijdyqvfwbsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101716.7904568-4154-265774932387811/AnsiballZ_systemd.py'
Dec 07 10:01:57 compute-1 sudo[229276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:01:57 compute-1 ceph-mon[80077]: pgmap v584: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Dec 07 10:01:57 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:01:58 compute-1 python3.9[229278]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 07 10:01:58 compute-1 systemd[1]: Reloading.
Dec 07 10:01:58 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:58 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9614003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:01:58 compute-1 systemd-rc-local-generator[229306]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 10:01:58 compute-1 systemd-sysv-generator[229309]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 10:01:58 compute-1 sudo[229276]: pam_unix(sudo:session): session closed for user root
Dec 07 10:01:58 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:58 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:01:58 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:01:58.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:01:58 compute-1 sudo[229389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsvtmumjnzzcyiikprisoksvrjogkyty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101716.7904568-4154-265774932387811/AnsiballZ_systemd.py'
Dec 07 10:01:58 compute-1 sudo[229389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:01:58 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:58 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f96080016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:01:59 compute-1 python3.9[229391]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
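[Editor's note] The two ansible-systemd invocations (daemon_reload=True earlier, then state=restarted enabled=True here) install and start the edpm_nova_compute.service unit copied in the previous step; the Reloading/generator messages nearby are the resulting systemd reloads. A rough Python equivalent of that sequence, using plain systemctl calls rather than the Ansible module, assuming systemctl on PATH and root privileges:

    import subprocess

    def enable_and_restart(unit="edpm_nova_compute.service"):
        subprocess.run(["systemctl", "daemon-reload"], check=True)
        subprocess.run(["systemctl", "enable", unit], check=True)
        subprocess.run(["systemctl", "restart", unit], check=True)

    enable_and_restart()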
Dec 07 10:01:59 compute-1 systemd[1]: Reloading.
Dec 07 10:01:59 compute-1 systemd-sysv-generator[229426]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 07 10:01:59 compute-1 systemd-rc-local-generator[229422]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 07 10:01:59 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:59 : epoch 6935506d : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 07 10:01:59 compute-1 systemd[1]: Starting nova_compute container...
Dec 07 10:01:59 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:01:59 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f96080016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:01:59 compute-1 systemd[1]: Started libcrun container.
Dec 07 10:01:59 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a579cbbcc9c2946533011ae3acd058b6bc3c818efb25663cc922f9e26c5c8dc3/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec 07 10:01:59 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a579cbbcc9c2946533011ae3acd058b6bc3c818efb25663cc922f9e26c5c8dc3/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 07 10:01:59 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a579cbbcc9c2946533011ae3acd058b6bc3c818efb25663cc922f9e26c5c8dc3/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 07 10:01:59 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a579cbbcc9c2946533011ae3acd058b6bc3c818efb25663cc922f9e26c5c8dc3/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 07 10:01:59 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a579cbbcc9c2946533011ae3acd058b6bc3c818efb25663cc922f9e26c5c8dc3/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 07 10:01:59 compute-1 podman[229431]: 2025-12-07 10:01:59.531577812 +0000 UTC m=+0.103804760 container init cdd17e06b5d5370f42d2aebef8d8030ee5a261133461f88e1569c7a0392804a1 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 07 10:01:59 compute-1 podman[229431]: 2025-12-07 10:01:59.552808925 +0000 UTC m=+0.125035813 container start cdd17e06b5d5370f42d2aebef8d8030ee5a261133461f88e1569c7a0392804a1 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, container_name=nova_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 07 10:01:59 compute-1 podman[229431]: nova_compute
Dec 07 10:01:59 compute-1 nova_compute[229446]: + sudo -E kolla_set_configs
Dec 07 10:01:59 compute-1 systemd[1]: Started nova_compute container.
Dec 07 10:01:59 compute-1 sudo[229389]: pam_unix(sudo:session): session closed for user root
Dec 07 10:01:59 compute-1 nova_compute[229446]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 07 10:01:59 compute-1 nova_compute[229446]: INFO:__main__:Validating config file
Dec 07 10:01:59 compute-1 nova_compute[229446]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 07 10:01:59 compute-1 nova_compute[229446]: INFO:__main__:Copying service configuration files
Dec 07 10:01:59 compute-1 nova_compute[229446]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec 07 10:01:59 compute-1 nova_compute[229446]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec 07 10:01:59 compute-1 nova_compute[229446]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec 07 10:01:59 compute-1 nova_compute[229446]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec 07 10:01:59 compute-1 nova_compute[229446]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec 07 10:01:59 compute-1 nova_compute[229446]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 07 10:01:59 compute-1 nova_compute[229446]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 07 10:01:59 compute-1 nova_compute[229446]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 07 10:01:59 compute-1 nova_compute[229446]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 07 10:01:59 compute-1 nova_compute[229446]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec 07 10:01:59 compute-1 nova_compute[229446]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec 07 10:01:59 compute-1 nova_compute[229446]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 07 10:01:59 compute-1 nova_compute[229446]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 07 10:01:59 compute-1 nova_compute[229446]: INFO:__main__:Deleting /etc/ceph
Dec 07 10:01:59 compute-1 nova_compute[229446]: INFO:__main__:Creating directory /etc/ceph
Dec 07 10:01:59 compute-1 nova_compute[229446]: INFO:__main__:Setting permission for /etc/ceph
Dec 07 10:01:59 compute-1 nova_compute[229446]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Dec 07 10:01:59 compute-1 nova_compute[229446]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 07 10:01:59 compute-1 nova_compute[229446]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Dec 07 10:01:59 compute-1 nova_compute[229446]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 07 10:01:59 compute-1 nova_compute[229446]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec 07 10:01:59 compute-1 nova_compute[229446]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 07 10:01:59 compute-1 nova_compute[229446]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec 07 10:01:59 compute-1 nova_compute[229446]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 07 10:01:59 compute-1 nova_compute[229446]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec 07 10:01:59 compute-1 nova_compute[229446]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec 07 10:01:59 compute-1 nova_compute[229446]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec 07 10:01:59 compute-1 nova_compute[229446]: INFO:__main__:Writing out command to execute
Dec 07 10:01:59 compute-1 nova_compute[229446]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 07 10:01:59 compute-1 nova_compute[229446]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 07 10:01:59 compute-1 nova_compute[229446]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec 07 10:01:59 compute-1 nova_compute[229446]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 07 10:01:59 compute-1 nova_compute[229446]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 07 10:01:59 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:01:59 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:01:59 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:01:59.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:01:59 compute-1 nova_compute[229446]: ++ cat /run_command
Dec 07 10:01:59 compute-1 nova_compute[229446]: + CMD=nova-compute
Dec 07 10:01:59 compute-1 nova_compute[229446]: + ARGS=
Dec 07 10:01:59 compute-1 nova_compute[229446]: + sudo kolla_copy_cacerts
Dec 07 10:01:59 compute-1 nova_compute[229446]: + [[ ! -n '' ]]
Dec 07 10:01:59 compute-1 nova_compute[229446]: + . kolla_extend_start
Dec 07 10:01:59 compute-1 nova_compute[229446]: Running command: 'nova-compute'
Dec 07 10:01:59 compute-1 nova_compute[229446]: + echo 'Running command: '\''nova-compute'\'''
Dec 07 10:01:59 compute-1 nova_compute[229446]: + umask 0022
Dec 07 10:01:59 compute-1 nova_compute[229446]: + exec nova-compute
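[Editor's note] The trace above shows kolla_set_configs applying the COPY_ALWAYS strategy from /var/lib/kolla/config_files/config.json and the wrapper then exec'ing the command read from /run_command. A minimal sketch of such a copy loop is below; the config.json keys (source, dest, perm) and the function name apply_config are assumptions for illustration, and the real kolla script additionally handles globs, ownership, optional files, and merges.

    # Illustrative COPY_ALWAYS-style config copy loop; not the kolla script itself.
    import json
    import shutil
    from pathlib import Path

    def apply_config(config_path="/var/lib/kolla/config_files/config.json"):
        cfg = json.loads(Path(config_path).read_text())
        for entry in cfg.get("config_files", []):
            src, dest = Path(entry["source"]), Path(entry["dest"])
            dest.parent.mkdir(parents=True, exist_ok=True)
            if dest.exists():
                print(f"Deleting {dest}")
                dest.unlink()
            print(f"Copying {src} to {dest}")
            shutil.copy2(src, dest)
            if "perm" in entry:
                print(f"Setting permission for {dest}")
                dest.chmod(int(entry["perm"], 8))
        # The command the wrapper later execs (written to /run_command by the real script).
        return cfg.get("command")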
Dec 07 10:01:59 compute-1 ceph-mon[80077]: pgmap v585: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec 07 10:02:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:02:00 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f96280021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:02:00 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:00 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:02:00 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:02:00.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:02:00 compute-1 sudo[229533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:02:00 compute-1 sudo[229533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:02:00 compute-1 sudo[229533]: pam_unix(sudo:session): session closed for user root
Dec 07 10:02:00 compute-1 python3.9[229634]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 07 10:02:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:02:00 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9614003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:02:01 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:02:01 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:02:01 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9610003b20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:02:01 compute-1 ceph-mon[80077]: pgmap v586: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 07 10:02:01 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:01 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:02:01 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:02:01.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:02:01 compute-1 nova_compute[229446]: 2025-12-07 10:02:01.807 229450 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 07 10:02:01 compute-1 nova_compute[229446]: 2025-12-07 10:02:01.808 229450 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 07 10:02:01 compute-1 nova_compute[229446]: 2025-12-07 10:02:01.808 229450 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 07 10:02:01 compute-1 nova_compute[229446]: 2025-12-07 10:02:01.808 229450 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Dec 07 10:02:01 compute-1 nova_compute[229446]: 2025-12-07 10:02:01.959 229450 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 07 10:02:01 compute-1 nova_compute[229446]: 2025-12-07 10:02:01.982 229450 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 07 10:02:01 compute-1 nova_compute[229446]: 2025-12-07 10:02:01.983 229450 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Dec 07 10:02:02 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:02:02 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9608003720 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:02:02 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:02 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:02:02 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:02:02.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:02:02 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/100202 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.584 229450 INFO nova.virt.driver [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Dec 07 10:02:02 compute-1 python3.9[229789]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.700 229450 INFO nova.compute.provider_config [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.711 229450 DEBUG oslo_concurrency.lockutils [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.712 229450 DEBUG oslo_concurrency.lockutils [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.712 229450 DEBUG oslo_concurrency.lockutils [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.712 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.713 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.713 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.713 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.713 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.713 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.714 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.714 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.714 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.714 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.714 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.715 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.715 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.715 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.715 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.715 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.715 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.716 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.716 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.716 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] console_host                   = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.716 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.716 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.717 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.717 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.717 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.717 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.717 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.717 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.718 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.718 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.718 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.718 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.718 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.719 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.719 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.719 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.719 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.720 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.720 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] host                           = compute-1.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.720 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.720 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.720 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.721 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.721 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.721 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.721 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.722 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.722 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.722 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.722 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.722 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.722 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.723 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.723 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.723 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.723 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.723 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.724 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.724 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.724 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.724 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.724 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.724 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.725 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.725 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.725 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.725 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.725 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.725 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.725 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.726 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.726 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.726 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.726 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.726 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.727 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.727 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.727 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.727 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.727 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.728 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] my_block_storage_ip            = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.728 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] my_ip                          = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.728 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.728 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.728 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.728 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.729 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.729 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.729 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.729 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.729 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.729 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.730 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.730 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.730 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.730 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.730 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.730 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.731 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.731 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.731 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.731 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.731 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.731 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.732 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.732 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.732 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.732 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.732 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.733 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.733 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.733 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.733 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.733 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.733 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.734 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.734 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.734 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.734 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.734 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.734 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.735 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.735 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.735 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.735 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.735 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.736 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.736 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.736 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.736 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.736 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.736 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.737 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.737 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.737 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.737 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.737 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.737 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.738 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.738 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.738 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.738 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.738 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.739 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.739 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.739 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.739 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.739 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.740 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.740 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.740 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.740 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.740 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.741 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.741 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.741 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.741 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.741 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.741 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.742 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.742 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.742 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.742 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.742 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.742 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.743 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.743 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.743 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.743 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.743 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.744 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.744 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.744 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.744 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.744 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.744 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.745 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.745 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.745 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.745 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.745 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.746 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.746 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.746 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.746 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.746 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.746 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.747 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.747 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.747 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.747 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.747 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.748 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.748 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.748 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.748 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.748 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.748 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.749 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.749 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.749 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.749 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.749 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.750 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.750 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.750 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.750 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.750 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.750 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.751 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.751 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.751 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.751 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.751 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.751 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.751 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.752 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.752 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.752 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.752 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.752 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.752 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.752 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.753 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.753 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.753 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.753 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.753 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.753 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.753 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.754 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.754 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.754 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.754 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.754 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.754 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.754 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.755 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.755 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.755 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.755 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.755 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.755 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.756 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.756 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.756 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.756 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.756 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.756 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.756 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.756 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.757 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.757 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.757 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.757 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.757 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.757 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.757 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.758 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.758 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.758 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.758 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.758 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.758 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.758 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.758 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.759 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.759 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.759 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.759 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.759 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.759 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.759 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.760 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.760 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.760 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.760 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.760 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.760 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.760 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.761 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.761 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.761 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.761 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.761 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.761 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.762 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.762 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.762 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.762 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.762 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.762 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.763 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.763 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.763 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.763 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.763 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.763 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.763 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.764 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.764 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.764 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.764 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.764 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.764 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.764 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.765 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.765 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.765 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.765 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.765 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.765 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.766 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.766 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.766 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.766 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.766 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.766 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.766 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.767 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.767 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.767 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.767 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.767 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.767 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.767 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.768 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.768 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.768 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.768 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.768 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.768 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.768 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.768 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.769 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.769 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.769 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.769 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.769 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.769 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.769 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.770 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.770 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.770 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.770 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.770 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.770 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.770 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.771 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.771 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.771 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.771 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.771 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.771 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.771 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.772 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.772 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.772 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.772 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.772 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.773 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.773 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.773 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.773 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.773 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.773 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.773 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.774 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.774 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.774 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.774 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.774 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.774 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.774 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.775 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.775 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.775 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.775 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.775 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.775 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.775 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.776 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.776 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.776 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.776 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.776 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.776 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.776 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.777 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.777 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.777 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.777 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.777 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.777 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.777 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.778 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.778 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.778 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.778 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.778 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.778 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.778 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.779 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.779 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.779 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.779 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.779 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.779 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.779 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.780 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.780 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.780 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.780 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.780 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.780 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.780 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.781 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.781 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.781 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.781 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.781 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.781 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.781 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.781 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.782 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.782 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.782 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.782 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.782 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.782 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.782 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.783 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.783 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.783 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.783 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.783 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.783 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.783 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.784 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.784 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.784 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.784 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.784 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.784 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.784 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.785 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.785 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.785 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.785 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.785 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.785 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.785 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.786 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.786 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.786 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.786 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.786 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.786 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.786 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.787 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.787 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.787 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.787 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.787 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.787 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.787 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.788 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.788 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.788 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.788 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.788 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.788 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.788 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.789 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.789 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.789 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.789 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.789 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.789 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.789 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.790 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.790 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.790 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.790 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.790 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.790 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.790 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.791 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.791 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.791 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.791 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.791 229450 WARNING oslo_config.cfg [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec 07 10:02:02 compute-1 nova_compute[229446]: live_migration_uri is deprecated for removal in favor of two other options that
Dec 07 10:02:02 compute-1 nova_compute[229446]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec 07 10:02:02 compute-1 nova_compute[229446]: and ``live_migration_inbound_addr`` respectively.
Dec 07 10:02:02 compute-1 nova_compute[229446]: ).  Its value may be silently ignored in the future.
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.792 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.792 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.792 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.792 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.792 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.792 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.792 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.793 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.793 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.793 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.793 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.793 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.793 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.793 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.794 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.794 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.794 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.794 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.794 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.rbd_secret_uuid        = 75f4c9fd-539a-5e17-b55a-0a12a4e2736c log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.794 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.794 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.795 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.795 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.795 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.795 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.795 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.795 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.795 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.796 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.796 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.796 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.796 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.796 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.796 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.797 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.797 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.797 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.797 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.797 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.797 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.797 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.798 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.798 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.798 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.798 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.798 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.798 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.798 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.799 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.799 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.799 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.799 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.799 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.799 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.799 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.800 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.800 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.800 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.800 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.800 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.800 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.800 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.801 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.801 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.801 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.801 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.801 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.801 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.801 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.802 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.802 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.802 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.802 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.802 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.802 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.802 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.803 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.803 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.803 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.803 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.803 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.803 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.803 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.804 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.804 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.804 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.804 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.804 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.804 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.804 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.805 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.805 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.805 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.805 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.805 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.805 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.805 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.805 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.806 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.806 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.806 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.806 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.806 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.806 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.806 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.806 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.807 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.807 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.807 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.807 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.807 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.807 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.807 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.808 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.808 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.808 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.808 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.808 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.808 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.809 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.809 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.809 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.809 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.809 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.809 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.809 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.810 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.810 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.810 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.810 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.810 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.810 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.810 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.810 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.811 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.811 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.811 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.811 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.811 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.811 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.812 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.812 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.812 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.812 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.812 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.812 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.812 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.813 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.813 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.813 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.813 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.813 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.813 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.813 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.814 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.814 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.814 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.814 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.814 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.815 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.815 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.815 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.815 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.815 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.815 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.815 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.816 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.816 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.816 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.816 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.816 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.816 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.816 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.817 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.817 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.817 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.817 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.817 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.817 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.818 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.818 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.818 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.818 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.818 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.818 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.818 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.819 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.819 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.819 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.819 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.819 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.819 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.819 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.820 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.820 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.820 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.820 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.820 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.821 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.821 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.821 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.821 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.821 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.822 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.822 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.822 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.822 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.822 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.822 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.823 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.823 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.823 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.823 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.823 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.824 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.824 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.824 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.824 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.824 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.824 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.824 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.824 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.825 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.825 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.825 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.825 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.825 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.825 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.825 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.826 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.826 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.826 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.826 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.826 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.826 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.826 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.827 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.827 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.827 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.827 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.827 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.827 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.827 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.828 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.828 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.828 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.828 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.828 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.828 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.828 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.829 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.829 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.829 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vnc.server_proxyclient_address = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.829 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.829 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.829 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.830 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.830 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.830 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.830 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.830 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.830 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.830 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.830 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.831 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.831 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.831 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.831 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.831 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.831 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.832 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.832 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.832 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.832 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.832 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.832 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.832 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.832 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.833 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.833 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.833 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.833 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.833 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.833 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.833 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.834 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.834 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.834 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.834 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.834 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.834 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.834 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.835 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.835 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.835 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.835 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.835 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.835 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.835 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.836 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.836 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.836 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.836 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.836 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.836 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.836 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.837 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.837 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.837 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.837 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.837 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.837 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.837 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.838 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.838 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.838 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.838 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.838 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.838 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.838 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.838 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.839 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.839 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.839 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.839 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.839 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.839 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.839 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.840 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.840 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.840 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.840 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.840 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.840 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.840 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.841 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.841 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.841 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.841 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.841 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.841 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.841 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.842 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.842 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.842 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.842 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.842 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.842 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.842 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.843 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.843 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.843 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.843 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.843 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.843 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.843 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.843 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.844 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.844 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.844 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.844 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.844 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.844 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.844 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.845 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.845 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.845 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.845 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.845 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.845 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.845 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.845 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.846 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.846 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.846 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.846 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.846 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.846 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.846 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.847 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.847 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.847 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.847 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.847 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.847 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.847 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.848 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.848 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.848 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.848 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.848 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.848 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.848 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.848 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.849 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.849 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.849 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.849 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.849 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.849 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.850 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.850 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.850 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.850 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.850 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.850 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.850 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.850 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.851 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.851 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.851 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.851 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.851 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.851 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.851 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.852 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.852 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.852 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.852 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.852 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.852 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.852 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.853 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.853 229450 DEBUG oslo_service.service [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.854 229450 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.866 229450 DEBUG nova.virt.libvirt.host [None req-fcaa2c27-ac4f-434f-ad0a-633a4cae8103 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.867 229450 DEBUG nova.virt.libvirt.host [None req-fcaa2c27-ac4f-434f-ad0a-633a4cae8103 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.867 229450 DEBUG nova.virt.libvirt.host [None req-fcaa2c27-ac4f-434f-ad0a-633a4cae8103 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.867 229450 DEBUG nova.virt.libvirt.host [None req-fcaa2c27-ac4f-434f-ad0a-633a4cae8103 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Dec 07 10:02:02 compute-1 systemd[1]: Starting libvirt QEMU daemon...
Dec 07 10:02:02 compute-1 systemd[1]: Started libvirt QEMU daemon.
Dec 07 10:02:02 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:02:02 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f96280021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.941 229450 DEBUG nova.virt.libvirt.host [None req-fcaa2c27-ac4f-434f-ad0a-633a4cae8103 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fd456295b80> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.943 229450 DEBUG nova.virt.libvirt.host [None req-fcaa2c27-ac4f-434f-ad0a-633a4cae8103 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fd456295b80> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.944 229450 INFO nova.virt.libvirt.driver [None req-fcaa2c27-ac4f-434f-ad0a-633a4cae8103 - - - - - -] Connection event '1' reason 'None'
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.986 229450 WARNING nova.virt.libvirt.driver [None req-fcaa2c27-ac4f-434f-ad0a-633a4cae8103 - - - - - -] Cannot update service status on host "compute-1.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-1.ctlplane.example.com could not be found.
Dec 07 10:02:02 compute-1 nova_compute[229446]: 2025-12-07 10:02:02.987 229450 DEBUG nova.virt.libvirt.volume.mount [None req-fcaa2c27-ac4f-434f-ad0a-633a4cae8103 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Dec 07 10:02:03 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:02:03 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9614003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:02:03 compute-1 python3.9[229991]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 07 10:02:03 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:03 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:02:03 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:02:03.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:02:03 compute-1 nova_compute[229446]: 2025-12-07 10:02:03.780 229450 INFO nova.virt.libvirt.host [None req-fcaa2c27-ac4f-434f-ad0a-633a4cae8103 - - - - - -] Libvirt host capabilities <capabilities>
Dec 07 10:02:03 compute-1 nova_compute[229446]: 
Dec 07 10:02:03 compute-1 nova_compute[229446]:   <host>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <uuid>33deaee1-72d1-48f6-b57d-96104f1f436a</uuid>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <cpu>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <arch>x86_64</arch>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model>EPYC-Rome-v4</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <vendor>AMD</vendor>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <microcode version='16777317'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <signature family='23' model='49' stepping='0'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <maxphysaddr mode='emulate' bits='40'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature name='x2apic'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature name='tsc-deadline'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature name='osxsave'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature name='hypervisor'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature name='tsc_adjust'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature name='spec-ctrl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature name='stibp'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature name='arch-capabilities'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature name='ssbd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature name='cmp_legacy'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature name='topoext'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature name='virt-ssbd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature name='lbrv'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature name='tsc-scale'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature name='vmcb-clean'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature name='pause-filter'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature name='pfthreshold'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature name='svme-addr-chk'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature name='rdctl-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature name='skip-l1dfl-vmentry'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature name='mds-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature name='pschange-mc-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <pages unit='KiB' size='4'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <pages unit='KiB' size='2048'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <pages unit='KiB' size='1048576'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </cpu>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <power_management>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <suspend_mem/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </power_management>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <iommu support='no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <migration_features>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <live/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <uri_transports>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <uri_transport>tcp</uri_transport>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <uri_transport>rdma</uri_transport>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </uri_transports>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </migration_features>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <topology>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <cells num='1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <cell id='0'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:           <memory unit='KiB'>7864316</memory>
Dec 07 10:02:03 compute-1 nova_compute[229446]:           <pages unit='KiB' size='4'>1966079</pages>
Dec 07 10:02:03 compute-1 nova_compute[229446]:           <pages unit='KiB' size='2048'>0</pages>
Dec 07 10:02:03 compute-1 nova_compute[229446]:           <pages unit='KiB' size='1048576'>0</pages>
Dec 07 10:02:03 compute-1 nova_compute[229446]:           <distances>
Dec 07 10:02:03 compute-1 nova_compute[229446]:             <sibling id='0' value='10'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:           </distances>
Dec 07 10:02:03 compute-1 nova_compute[229446]:           <cpus num='8'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:           </cpus>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         </cell>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </cells>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </topology>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <cache>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </cache>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <secmodel>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model>selinux</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <doi>0</doi>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </secmodel>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <secmodel>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model>dac</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <doi>0</doi>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <baselabel type='kvm'>+107:+107</baselabel>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <baselabel type='qemu'>+107:+107</baselabel>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </secmodel>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   </host>
Dec 07 10:02:03 compute-1 nova_compute[229446]: 
Dec 07 10:02:03 compute-1 nova_compute[229446]:   <guest>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <os_type>hvm</os_type>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <arch name='i686'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <wordsize>32</wordsize>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <domain type='qemu'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <domain type='kvm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </arch>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <features>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <pae/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <nonpae/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <acpi default='on' toggle='yes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <apic default='on' toggle='no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <cpuselection/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <deviceboot/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <disksnapshot default='on' toggle='no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <externalSnapshot/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </features>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   </guest>
Dec 07 10:02:03 compute-1 nova_compute[229446]: 
Dec 07 10:02:03 compute-1 nova_compute[229446]:   <guest>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <os_type>hvm</os_type>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <arch name='x86_64'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <wordsize>64</wordsize>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <domain type='qemu'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <domain type='kvm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </arch>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <features>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <acpi default='on' toggle='yes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <apic default='on' toggle='no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <cpuselection/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <deviceboot/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <disksnapshot default='on' toggle='no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <externalSnapshot/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </features>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   </guest>
Dec 07 10:02:03 compute-1 nova_compute[229446]: 
Dec 07 10:02:03 compute-1 nova_compute[229446]: </capabilities>
Dec 07 10:02:03 compute-1 nova_compute[229446]: 
Dec 07 10:02:03 compute-1 nova_compute[229446]: 2025-12-07 10:02:03.787 229450 DEBUG nova.virt.libvirt.host [None req-fcaa2c27-ac4f-434f-ad0a-633a4cae8103 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 07 10:02:03 compute-1 nova_compute[229446]: 2025-12-07 10:02:03.810 229450 DEBUG nova.virt.libvirt.host [None req-fcaa2c27-ac4f-434f-ad0a-633a4cae8103 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec 07 10:02:03 compute-1 nova_compute[229446]: <domainCapabilities>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   <path>/usr/libexec/qemu-kvm</path>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   <domain>kvm</domain>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   <arch>i686</arch>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   <vcpu max='240'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   <iothreads supported='yes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   <os supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <enum name='firmware'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <loader supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='type'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>rom</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>pflash</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='readonly'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>yes</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>no</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='secure'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>no</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </loader>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   </os>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   <cpu>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <mode name='host-passthrough' supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='hostPassthroughMigratable'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>on</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>off</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </mode>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <mode name='maximum' supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='maximumMigratable'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>on</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>off</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </mode>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <mode name='host-model' supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <vendor>AMD</vendor>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='x2apic'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='tsc-deadline'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='hypervisor'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='tsc_adjust'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='spec-ctrl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='stibp'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='ssbd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='cmp_legacy'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='overflow-recov'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='succor'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='ibrs'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='amd-ssbd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='virt-ssbd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='lbrv'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='tsc-scale'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='vmcb-clean'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='flushbyasid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='pause-filter'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='pfthreshold'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='svme-addr-chk'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='disable' name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </mode>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <mode name='custom' supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Broadwell'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Broadwell-IBRS'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Broadwell-noTSX'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Broadwell-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Broadwell-v2'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Broadwell-v3'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Broadwell-v4'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Cascadelake-Server'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Cascadelake-Server-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Cascadelake-Server-v2'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Cascadelake-Server-v3'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Cascadelake-Server-v4'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Cascadelake-Server-v5'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Cooperlake'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Cooperlake-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Cooperlake-v2'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Denverton'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='mpx'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Denverton-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='mpx'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Denverton-v2'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Denverton-v3'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Dhyana-v2'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='EPYC-Genoa'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amd-psfd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='auto-ibrs'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512ifma'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='no-nested-data-bp'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='null-sel-clr-base'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='stibp-always-on'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='EPYC-Genoa-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amd-psfd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='auto-ibrs'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512ifma'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='no-nested-data-bp'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='null-sel-clr-base'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='stibp-always-on'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='EPYC-Milan'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='EPYC-Milan-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='EPYC-Milan-v2'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amd-psfd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='no-nested-data-bp'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='null-sel-clr-base'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='stibp-always-on'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='EPYC-Rome'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='EPYC-Rome-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='EPYC-Rome-v2'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='EPYC-Rome-v3'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='EPYC-v3'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='EPYC-v4'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='GraniteRapids'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-fp16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-int8'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-tile'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx-vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-fp16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512ifma'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fbsdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrc'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrs'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fzrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='mcdt-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pbrsb-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='prefetchiti'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='psdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='sbdr-ssdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='serialize'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='tsx-ldtrk'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xfd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='GraniteRapids-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-fp16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-int8'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-tile'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx-vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-fp16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512ifma'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fbsdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrc'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrs'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fzrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='mcdt-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pbrsb-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='prefetchiti'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='psdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='sbdr-ssdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='serialize'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='tsx-ldtrk'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xfd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='GraniteRapids-v2'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-fp16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-int8'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-tile'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx-vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx10'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx10-128'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx10-256'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx10-512'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-fp16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512ifma'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='cldemote'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fbsdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrc'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrs'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fzrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='mcdt-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='movdir64b'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='movdiri'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pbrsb-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='prefetchiti'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='psdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='sbdr-ssdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='serialize'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ss'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='tsx-ldtrk'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xfd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Haswell'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Haswell-IBRS'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Haswell-noTSX'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Haswell-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Haswell-v2'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Haswell-v3'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Haswell-v4'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Icelake-Server'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Icelake-Server-noTSX'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Icelake-Server-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Icelake-Server-v2'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Icelake-Server-v3'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Icelake-Server-v4'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512ifma'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Icelake-Server-v5'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512ifma'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Icelake-Server-v6'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512ifma'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Icelake-Server-v7'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512ifma'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='IvyBridge'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='IvyBridge-IBRS'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='IvyBridge-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='IvyBridge-v2'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='KnightsMill'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-4fmaps'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-4vnniw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512er'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512pf'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ss'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='KnightsMill-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-4fmaps'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-4vnniw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512er'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512pf'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ss'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Opteron_G4'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fma4'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xop'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Opteron_G4-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fma4'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xop'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Opteron_G5'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fma4'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='tbm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xop'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Opteron_G5-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fma4'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='tbm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xop'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='SapphireRapids'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-int8'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-tile'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx-vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-fp16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512ifma'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrc'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrs'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fzrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='serialize'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='tsx-ldtrk'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xfd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='SapphireRapids-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-int8'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-tile'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx-vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-fp16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512ifma'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrc'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrs'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fzrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='serialize'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='tsx-ldtrk'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xfd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='SapphireRapids-v2'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-int8'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-tile'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx-vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-fp16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512ifma'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fbsdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrc'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrs'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fzrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='psdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='sbdr-ssdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='serialize'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='tsx-ldtrk'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xfd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='SapphireRapids-v3'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-int8'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-tile'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx-vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-fp16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512ifma'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='cldemote'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fbsdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrc'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrs'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fzrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='movdir64b'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='movdiri'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='psdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='sbdr-ssdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='serialize'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ss'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='tsx-ldtrk'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xfd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='SierraForest'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx-ifma'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx-ne-convert'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx-vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx-vnni-int8'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='cmpccxadd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fbsdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrs'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='mcdt-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pbrsb-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='psdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='sbdr-ssdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='serialize'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='SierraForest-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx-ifma'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx-ne-convert'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx-vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx-vnni-int8'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='cmpccxadd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fbsdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrs'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='mcdt-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pbrsb-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='psdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='sbdr-ssdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='serialize'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Skylake-Client'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Skylake-Client-IBRS'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Skylake-Client-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Skylake-Client-v2'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Skylake-Client-v3'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Skylake-Client-v4'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Skylake-Server'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Skylake-Server-IBRS'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Skylake-Server-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Skylake-Server-v2'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Skylake-Server-v3'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Skylake-Server-v4'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Skylake-Server-v5'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Snowridge'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='cldemote'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='core-capability'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='movdir64b'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='movdiri'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='mpx'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='split-lock-detect'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Snowridge-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='cldemote'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='core-capability'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='movdir64b'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='movdiri'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='mpx'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='split-lock-detect'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Snowridge-v2'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='cldemote'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='core-capability'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='movdir64b'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='movdiri'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='split-lock-detect'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Snowridge-v3'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='cldemote'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='core-capability'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='movdir64b'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='movdiri'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='split-lock-detect'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Snowridge-v4'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='cldemote'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='movdir64b'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='movdiri'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='athlon'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='3dnow'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='3dnowext'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='athlon-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='3dnow'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='3dnowext'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='core2duo'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ss'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='core2duo-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ss'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='coreduo'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ss'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='coreduo-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ss'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='n270'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ss'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='n270-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ss'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='phenom'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='3dnow'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='3dnowext'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='phenom-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='3dnow'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='3dnowext'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </mode>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   </cpu>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   <memoryBacking supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <enum name='sourceType'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <value>file</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <value>anonymous</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <value>memfd</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   </memoryBacking>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   <devices>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <disk supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='diskDevice'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>disk</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>cdrom</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>floppy</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>lun</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='bus'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>ide</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>fdc</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>scsi</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>virtio</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>usb</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>sata</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='model'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>virtio</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>virtio-transitional</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>virtio-non-transitional</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </disk>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <graphics supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='type'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>vnc</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>egl-headless</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>dbus</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </graphics>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <video supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='modelType'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>vga</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>cirrus</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>virtio</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>none</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>bochs</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>ramfb</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </video>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <hostdev supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='mode'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>subsystem</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='startupPolicy'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>default</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>mandatory</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>requisite</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>optional</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='subsysType'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>usb</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>pci</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>scsi</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='capsType'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='pciBackend'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </hostdev>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <rng supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='model'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>virtio</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>virtio-transitional</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>virtio-non-transitional</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='backendModel'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>random</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>egd</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>builtin</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </rng>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <filesystem supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='driverType'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>path</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>handle</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>virtiofs</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </filesystem>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <tpm supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='model'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>tpm-tis</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>tpm-crb</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='backendModel'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>emulator</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>external</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='backendVersion'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>2.0</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </tpm>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <redirdev supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='bus'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>usb</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </redirdev>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <channel supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='type'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>pty</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>unix</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </channel>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <crypto supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='model'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='type'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>qemu</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='backendModel'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>builtin</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </crypto>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <interface supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='backendType'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>default</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>passt</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </interface>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <panic supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='model'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>isa</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>hyperv</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </panic>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <console supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='type'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>null</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>vc</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>pty</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>dev</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>file</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>pipe</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>stdio</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>udp</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>tcp</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>unix</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>qemu-vdagent</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>dbus</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </console>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   </devices>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   <features>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <gic supported='no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <vmcoreinfo supported='yes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <genid supported='yes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <backingStoreInput supported='yes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <backup supported='yes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <async-teardown supported='yes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <ps2 supported='yes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <sev supported='no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <sgx supported='no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <hyperv supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='features'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>relaxed</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>vapic</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>spinlocks</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>vpindex</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>runtime</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>synic</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>stimer</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>reset</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>vendor_id</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>frequencies</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>reenlightenment</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>tlbflush</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>ipi</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>avic</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>emsr_bitmap</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>xmm_input</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <defaults>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <spinlocks>4095</spinlocks>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <stimer_direct>on</stimer_direct>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <tlbflush_direct>on</tlbflush_direct>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <tlbflush_extended>on</tlbflush_extended>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </defaults>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </hyperv>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <launchSecurity supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='sectype'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>tdx</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </launchSecurity>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   </features>
Dec 07 10:02:03 compute-1 nova_compute[229446]: </domainCapabilities>
Dec 07 10:02:03 compute-1 nova_compute[229446]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 07 10:02:03 compute-1 nova_compute[229446]: 2025-12-07 10:02:03.815 229450 DEBUG nova.virt.libvirt.host [None req-fcaa2c27-ac4f-434f-ad0a-633a4cae8103 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec 07 10:02:03 compute-1 nova_compute[229446]: <domainCapabilities>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   <path>/usr/libexec/qemu-kvm</path>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   <domain>kvm</domain>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   <arch>i686</arch>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   <vcpu max='4096'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   <iothreads supported='yes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   <os supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <enum name='firmware'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <loader supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='type'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>rom</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>pflash</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='readonly'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>yes</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>no</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='secure'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>no</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </loader>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   </os>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   <cpu>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <mode name='host-passthrough' supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='hostPassthroughMigratable'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>on</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>off</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </mode>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <mode name='maximum' supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='maximumMigratable'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>on</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>off</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </mode>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <mode name='host-model' supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <vendor>AMD</vendor>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='x2apic'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='tsc-deadline'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='hypervisor'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='tsc_adjust'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='spec-ctrl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='stibp'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='ssbd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='cmp_legacy'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='overflow-recov'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='succor'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='ibrs'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='amd-ssbd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='virt-ssbd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='lbrv'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='tsc-scale'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='vmcb-clean'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='flushbyasid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='pause-filter'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='pfthreshold'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='svme-addr-chk'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='disable' name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </mode>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <mode name='custom' supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Broadwell'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Broadwell-IBRS'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Broadwell-noTSX'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Broadwell-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Broadwell-v2'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Broadwell-v3'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Broadwell-v4'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Cascadelake-Server'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Cascadelake-Server-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Cascadelake-Server-v2'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Cascadelake-Server-v3'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Cascadelake-Server-v4'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Cascadelake-Server-v5'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Cooperlake'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Cooperlake-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Cooperlake-v2'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Denverton'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='mpx'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Denverton-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='mpx'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Denverton-v2'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Denverton-v3'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Dhyana-v2'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='EPYC-Genoa'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amd-psfd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='auto-ibrs'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512ifma'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='no-nested-data-bp'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='null-sel-clr-base'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='stibp-always-on'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='EPYC-Genoa-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amd-psfd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='auto-ibrs'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512ifma'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='no-nested-data-bp'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='null-sel-clr-base'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='stibp-always-on'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='EPYC-Milan'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='EPYC-Milan-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='EPYC-Milan-v2'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amd-psfd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='no-nested-data-bp'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='null-sel-clr-base'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='stibp-always-on'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='EPYC-Rome'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='EPYC-Rome-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='EPYC-Rome-v2'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='EPYC-Rome-v3'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='EPYC-v3'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='EPYC-v4'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='GraniteRapids'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-fp16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-int8'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-tile'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx-vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-fp16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512ifma'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fbsdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrc'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrs'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fzrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='mcdt-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pbrsb-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='prefetchiti'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='psdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='sbdr-ssdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='serialize'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='tsx-ldtrk'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xfd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='GraniteRapids-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-fp16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-int8'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-tile'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx-vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-fp16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512ifma'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fbsdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrc'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrs'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fzrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='mcdt-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pbrsb-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='prefetchiti'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='psdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='sbdr-ssdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='serialize'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='tsx-ldtrk'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xfd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='GraniteRapids-v2'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-fp16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-int8'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-tile'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx-vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx10'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx10-128'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx10-256'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx10-512'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-fp16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512ifma'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='cldemote'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fbsdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrc'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrs'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fzrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='mcdt-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='movdir64b'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='movdiri'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pbrsb-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='prefetchiti'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='psdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='sbdr-ssdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='serialize'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ss'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='tsx-ldtrk'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xfd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Haswell'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Haswell-IBRS'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Haswell-noTSX'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Haswell-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Haswell-v2'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Haswell-v3'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Haswell-v4'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Icelake-Server'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Icelake-Server-noTSX'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Icelake-Server-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Icelake-Server-v2'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Icelake-Server-v3'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Icelake-Server-v4'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512ifma'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Icelake-Server-v5'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512ifma'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Icelake-Server-v6'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512ifma'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Icelake-Server-v7'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512ifma'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='IvyBridge'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='IvyBridge-IBRS'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='IvyBridge-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='IvyBridge-v2'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='KnightsMill'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-4fmaps'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-4vnniw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512er'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512pf'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ss'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='KnightsMill-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-4fmaps'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-4vnniw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512er'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512pf'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ss'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Opteron_G4'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fma4'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xop'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Opteron_G4-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fma4'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xop'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Opteron_G5'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fma4'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='tbm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xop'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Opteron_G5-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fma4'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='tbm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xop'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='SapphireRapids'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-int8'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-tile'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx-vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-fp16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512ifma'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrc'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrs'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fzrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='serialize'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='tsx-ldtrk'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xfd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='SapphireRapids-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-int8'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-tile'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx-vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-fp16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512ifma'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrc'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrs'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fzrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='serialize'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='tsx-ldtrk'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xfd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='SapphireRapids-v2'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-int8'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-tile'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx-vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-fp16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512ifma'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fbsdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrc'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrs'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fzrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='psdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='sbdr-ssdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='serialize'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='tsx-ldtrk'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xfd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='SapphireRapids-v3'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-int8'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-tile'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx-vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-fp16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512ifma'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='cldemote'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fbsdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrc'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrs'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fzrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='movdir64b'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='movdiri'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='psdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='sbdr-ssdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='serialize'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ss'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='tsx-ldtrk'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xfd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='SierraForest'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx-ifma'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx-ne-convert'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx-vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx-vnni-int8'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='cmpccxadd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fbsdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrs'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='mcdt-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pbrsb-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='psdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='sbdr-ssdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='serialize'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='SierraForest-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx-ifma'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx-ne-convert'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx-vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx-vnni-int8'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='cmpccxadd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fbsdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrs'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='mcdt-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pbrsb-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='psdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='sbdr-ssdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='serialize'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Skylake-Client'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Skylake-Client-IBRS'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Skylake-Client-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Skylake-Client-v2'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Skylake-Client-v3'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Skylake-Client-v4'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Skylake-Server'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Skylake-Server-IBRS'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Skylake-Server-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Skylake-Server-v2'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Skylake-Server-v3'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Skylake-Server-v4'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Skylake-Server-v5'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Snowridge'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='cldemote'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='core-capability'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='movdir64b'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='movdiri'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='mpx'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='split-lock-detect'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Snowridge-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='cldemote'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='core-capability'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='movdir64b'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='movdiri'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='mpx'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='split-lock-detect'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Snowridge-v2'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='cldemote'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='core-capability'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='movdir64b'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='movdiri'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='split-lock-detect'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Snowridge-v3'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='cldemote'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='core-capability'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='movdir64b'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='movdiri'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='split-lock-detect'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Snowridge-v4'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='cldemote'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='movdir64b'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='movdiri'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='athlon'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='3dnow'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='3dnowext'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='athlon-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='3dnow'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='3dnowext'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='core2duo'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ss'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='core2duo-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ss'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='coreduo'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ss'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='coreduo-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ss'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='n270'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ss'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='n270-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ss'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='phenom'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='3dnow'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='3dnowext'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='phenom-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='3dnow'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='3dnowext'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </mode>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   </cpu>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   <memoryBacking supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <enum name='sourceType'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <value>file</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <value>anonymous</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <value>memfd</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   </memoryBacking>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   <devices>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <disk supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='diskDevice'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>disk</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>cdrom</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>floppy</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>lun</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='bus'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>fdc</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>scsi</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>virtio</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>usb</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>sata</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='model'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>virtio</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>virtio-transitional</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>virtio-non-transitional</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </disk>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <graphics supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='type'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>vnc</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>egl-headless</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>dbus</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </graphics>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <video supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='modelType'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>vga</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>cirrus</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>virtio</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>none</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>bochs</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>ramfb</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </video>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <hostdev supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='mode'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>subsystem</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='startupPolicy'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>default</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>mandatory</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>requisite</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>optional</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='subsysType'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>usb</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>pci</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>scsi</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='capsType'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='pciBackend'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </hostdev>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <rng supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='model'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>virtio</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>virtio-transitional</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>virtio-non-transitional</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='backendModel'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>random</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>egd</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>builtin</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </rng>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <filesystem supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='driverType'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>path</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>handle</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>virtiofs</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </filesystem>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <tpm supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='model'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>tpm-tis</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>tpm-crb</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='backendModel'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>emulator</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>external</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='backendVersion'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>2.0</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </tpm>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <redirdev supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='bus'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>usb</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </redirdev>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <channel supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='type'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>pty</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>unix</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </channel>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <crypto supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='model'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='type'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>qemu</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='backendModel'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>builtin</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </crypto>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <interface supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='backendType'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>default</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>passt</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </interface>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <panic supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='model'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>isa</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>hyperv</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </panic>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <console supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='type'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>null</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>vc</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>pty</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>dev</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>file</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>pipe</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>stdio</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>udp</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>tcp</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>unix</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>qemu-vdagent</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>dbus</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </console>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   </devices>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   <features>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <gic supported='no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <vmcoreinfo supported='yes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <genid supported='yes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <backingStoreInput supported='yes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <backup supported='yes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <async-teardown supported='yes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <ps2 supported='yes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <sev supported='no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <sgx supported='no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <hyperv supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='features'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>relaxed</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>vapic</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>spinlocks</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>vpindex</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>runtime</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>synic</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>stimer</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>reset</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>vendor_id</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>frequencies</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>reenlightenment</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>tlbflush</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>ipi</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>avic</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>emsr_bitmap</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>xmm_input</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <defaults>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <spinlocks>4095</spinlocks>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <stimer_direct>on</stimer_direct>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <tlbflush_direct>on</tlbflush_direct>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <tlbflush_extended>on</tlbflush_extended>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </defaults>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </hyperv>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <launchSecurity supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='sectype'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>tdx</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </launchSecurity>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   </features>
Dec 07 10:02:03 compute-1 nova_compute[229446]: </domainCapabilities>
Dec 07 10:02:03 compute-1 nova_compute[229446]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 07 10:02:03 compute-1 nova_compute[229446]: 2025-12-07 10:02:03.842 229450 DEBUG nova.virt.libvirt.host [None req-fcaa2c27-ac4f-434f-ad0a-633a4cae8103 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 07 10:02:03 compute-1 nova_compute[229446]: 2025-12-07 10:02:03.845 229450 DEBUG nova.virt.libvirt.host [None req-fcaa2c27-ac4f-434f-ad0a-633a4cae8103 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec 07 10:02:03 compute-1 nova_compute[229446]: <domainCapabilities>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   <path>/usr/libexec/qemu-kvm</path>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   <domain>kvm</domain>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   <arch>x86_64</arch>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   <vcpu max='4096'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   <iothreads supported='yes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   <os supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <enum name='firmware'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <value>efi</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <loader supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='type'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>rom</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>pflash</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='readonly'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>yes</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>no</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='secure'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>yes</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>no</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </loader>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   </os>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   <cpu>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <mode name='host-passthrough' supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='hostPassthroughMigratable'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>on</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>off</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </mode>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <mode name='maximum' supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='maximumMigratable'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>on</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>off</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </mode>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <mode name='host-model' supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <vendor>AMD</vendor>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='x2apic'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='tsc-deadline'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='hypervisor'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='tsc_adjust'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='spec-ctrl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='stibp'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='ssbd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='cmp_legacy'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='overflow-recov'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='succor'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='ibrs'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='amd-ssbd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='virt-ssbd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='lbrv'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='tsc-scale'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='vmcb-clean'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='flushbyasid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='pause-filter'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='pfthreshold'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='svme-addr-chk'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='disable' name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </mode>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <mode name='custom' supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Broadwell'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Broadwell-IBRS'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Broadwell-noTSX'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Broadwell-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Broadwell-v2'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Broadwell-v3'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Broadwell-v4'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Cascadelake-Server'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Cascadelake-Server-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Cascadelake-Server-v2'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Cascadelake-Server-v3'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Cascadelake-Server-v4'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Cascadelake-Server-v5'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Cooperlake'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Cooperlake-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Cooperlake-v2'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Denverton'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='mpx'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Denverton-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='mpx'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Denverton-v2'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Denverton-v3'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Dhyana-v2'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='EPYC-Genoa'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amd-psfd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='auto-ibrs'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512ifma'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='no-nested-data-bp'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='null-sel-clr-base'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='stibp-always-on'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='EPYC-Genoa-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amd-psfd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='auto-ibrs'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512ifma'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='no-nested-data-bp'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='null-sel-clr-base'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='stibp-always-on'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='EPYC-Milan'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='EPYC-Milan-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='EPYC-Milan-v2'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amd-psfd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='no-nested-data-bp'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='null-sel-clr-base'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='stibp-always-on'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='EPYC-Rome'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='EPYC-Rome-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='EPYC-Rome-v2'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='EPYC-Rome-v3'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='EPYC-v3'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='EPYC-v4'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='GraniteRapids'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-fp16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-int8'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-tile'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx-vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-fp16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512ifma'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fbsdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrc'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrs'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fzrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='mcdt-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pbrsb-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='prefetchiti'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='psdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='sbdr-ssdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='serialize'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='tsx-ldtrk'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xfd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='GraniteRapids-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-fp16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-int8'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-tile'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx-vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-fp16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512ifma'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fbsdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrc'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrs'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fzrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='mcdt-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pbrsb-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='prefetchiti'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='psdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='sbdr-ssdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='serialize'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='tsx-ldtrk'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xfd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='GraniteRapids-v2'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-fp16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-int8'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-tile'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx-vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx10'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx10-128'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx10-256'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx10-512'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-fp16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512ifma'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='cldemote'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fbsdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrc'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrs'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fzrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='mcdt-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='movdir64b'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='movdiri'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pbrsb-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='prefetchiti'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='psdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='sbdr-ssdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='serialize'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ss'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='tsx-ldtrk'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xfd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Haswell'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Haswell-IBRS'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Haswell-noTSX'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Haswell-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Haswell-v2'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Haswell-v3'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Haswell-v4'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Icelake-Server'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Icelake-Server-noTSX'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Icelake-Server-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Icelake-Server-v2'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Icelake-Server-v3'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Icelake-Server-v4'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512ifma'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Icelake-Server-v5'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512ifma'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Icelake-Server-v6'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512ifma'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Icelake-Server-v7'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512ifma'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='IvyBridge'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='IvyBridge-IBRS'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='IvyBridge-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='IvyBridge-v2'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='KnightsMill'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-4fmaps'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-4vnniw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512er'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512pf'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ss'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='KnightsMill-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-4fmaps'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-4vnniw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512er'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512pf'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ss'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Opteron_G4'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fma4'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xop'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Opteron_G4-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fma4'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xop'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Opteron_G5'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fma4'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='tbm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xop'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Opteron_G5-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fma4'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='tbm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xop'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='SapphireRapids'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-int8'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-tile'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx-vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-fp16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512ifma'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrc'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrs'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fzrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='serialize'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='tsx-ldtrk'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xfd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='SapphireRapids-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-int8'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-tile'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx-vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-fp16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512ifma'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrc'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrs'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fzrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='serialize'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='tsx-ldtrk'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xfd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='SapphireRapids-v2'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-int8'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-tile'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx-vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-fp16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512ifma'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fbsdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrc'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrs'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fzrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='psdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='sbdr-ssdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='serialize'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='tsx-ldtrk'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xfd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='SapphireRapids-v3'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-int8'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-tile'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx-vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-fp16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512ifma'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='cldemote'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fbsdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrc'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrs'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fzrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='movdir64b'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='movdiri'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='psdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='sbdr-ssdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='serialize'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ss'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='tsx-ldtrk'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xfd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='SierraForest'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx-ifma'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx-ne-convert'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx-vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx-vnni-int8'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='cmpccxadd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fbsdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrs'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='mcdt-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pbrsb-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='psdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='sbdr-ssdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='serialize'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='SierraForest-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx-ifma'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx-ne-convert'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx-vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx-vnni-int8'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='cmpccxadd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fbsdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrs'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='mcdt-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pbrsb-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='psdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='sbdr-ssdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='serialize'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Skylake-Client'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Skylake-Client-IBRS'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Skylake-Client-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Skylake-Client-v2'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Skylake-Client-v3'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Skylake-Client-v4'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Skylake-Server'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Skylake-Server-IBRS'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Skylake-Server-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Skylake-Server-v2'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Skylake-Server-v3'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Skylake-Server-v4'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Skylake-Server-v5'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Snowridge'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='cldemote'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='core-capability'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='movdir64b'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='movdiri'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='mpx'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='split-lock-detect'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Snowridge-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='cldemote'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='core-capability'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='movdir64b'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='movdiri'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='mpx'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='split-lock-detect'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Snowridge-v2'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='cldemote'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='core-capability'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='movdir64b'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='movdiri'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='split-lock-detect'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Snowridge-v3'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='cldemote'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='core-capability'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='movdir64b'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='movdiri'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='split-lock-detect'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Snowridge-v4'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='cldemote'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='movdir64b'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='movdiri'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='athlon'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='3dnow'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='3dnowext'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='athlon-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='3dnow'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='3dnowext'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='core2duo'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ss'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='core2duo-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ss'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='coreduo'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ss'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='coreduo-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ss'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='n270'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ss'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='n270-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ss'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='phenom'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='3dnow'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='3dnowext'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='phenom-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='3dnow'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='3dnowext'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </mode>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   </cpu>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   <memoryBacking supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <enum name='sourceType'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <value>file</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <value>anonymous</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <value>memfd</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   </memoryBacking>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   <devices>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <disk supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='diskDevice'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>disk</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>cdrom</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>floppy</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>lun</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='bus'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>fdc</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>scsi</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>virtio</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>usb</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>sata</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='model'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>virtio</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>virtio-transitional</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>virtio-non-transitional</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </disk>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <graphics supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='type'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>vnc</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>egl-headless</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>dbus</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </graphics>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <video supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='modelType'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>vga</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>cirrus</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>virtio</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>none</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>bochs</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>ramfb</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </video>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <hostdev supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='mode'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>subsystem</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='startupPolicy'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>default</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>mandatory</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>requisite</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>optional</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='subsysType'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>usb</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>pci</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>scsi</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='capsType'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='pciBackend'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </hostdev>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <rng supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='model'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>virtio</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>virtio-transitional</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>virtio-non-transitional</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='backendModel'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>random</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>egd</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>builtin</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </rng>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <filesystem supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='driverType'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>path</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>handle</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>virtiofs</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </filesystem>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <tpm supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='model'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>tpm-tis</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>tpm-crb</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='backendModel'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>emulator</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>external</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='backendVersion'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>2.0</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </tpm>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <redirdev supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='bus'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>usb</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </redirdev>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <channel supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='type'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>pty</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>unix</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </channel>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <crypto supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='model'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='type'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>qemu</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='backendModel'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>builtin</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </crypto>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <interface supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='backendType'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>default</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>passt</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </interface>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <panic supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='model'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>isa</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>hyperv</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </panic>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <console supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='type'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>null</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>vc</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>pty</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>dev</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>file</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>pipe</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>stdio</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>udp</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>tcp</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>unix</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>qemu-vdagent</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>dbus</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </console>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   </devices>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   <features>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <gic supported='no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <vmcoreinfo supported='yes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <genid supported='yes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <backingStoreInput supported='yes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <backup supported='yes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <async-teardown supported='yes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <ps2 supported='yes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <sev supported='no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <sgx supported='no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <hyperv supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='features'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>relaxed</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>vapic</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>spinlocks</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>vpindex</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>runtime</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>synic</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>stimer</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>reset</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>vendor_id</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>frequencies</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>reenlightenment</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>tlbflush</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>ipi</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>avic</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>emsr_bitmap</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>xmm_input</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <defaults>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <spinlocks>4095</spinlocks>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <stimer_direct>on</stimer_direct>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <tlbflush_direct>on</tlbflush_direct>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <tlbflush_extended>on</tlbflush_extended>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </defaults>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </hyperv>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <launchSecurity supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='sectype'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>tdx</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </launchSecurity>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   </features>
Dec 07 10:02:03 compute-1 nova_compute[229446]: </domainCapabilities>
Dec 07 10:02:03 compute-1 nova_compute[229446]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 07 10:02:03 compute-1 nova_compute[229446]: 2025-12-07 10:02:03.903 229450 DEBUG nova.virt.libvirt.host [None req-fcaa2c27-ac4f-434f-ad0a-633a4cae8103 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec 07 10:02:03 compute-1 nova_compute[229446]: <domainCapabilities>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   <path>/usr/libexec/qemu-kvm</path>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   <domain>kvm</domain>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   <arch>x86_64</arch>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   <vcpu max='240'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   <iothreads supported='yes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   <os supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <enum name='firmware'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <loader supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='type'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>rom</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>pflash</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='readonly'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>yes</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>no</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='secure'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>no</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </loader>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   </os>
Dec 07 10:02:03 compute-1 nova_compute[229446]:   <cpu>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <mode name='host-passthrough' supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='hostPassthroughMigratable'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>on</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>off</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </mode>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <mode name='maximum' supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <enum name='maximumMigratable'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>on</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <value>off</value>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </mode>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <mode name='host-model' supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <vendor>AMD</vendor>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='x2apic'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='tsc-deadline'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='hypervisor'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='tsc_adjust'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='spec-ctrl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='stibp'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='ssbd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='cmp_legacy'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='overflow-recov'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='succor'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='ibrs'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='amd-ssbd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='virt-ssbd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='lbrv'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='tsc-scale'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='vmcb-clean'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='flushbyasid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='pause-filter'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='pfthreshold'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='svme-addr-chk'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <feature policy='disable' name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     </mode>
Dec 07 10:02:03 compute-1 nova_compute[229446]:     <mode name='custom' supported='yes'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Broadwell'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Broadwell-IBRS'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Broadwell-noTSX'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Broadwell-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Broadwell-v2'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Broadwell-v3'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Broadwell-v4'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Cascadelake-Server'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Cascadelake-Server-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Cascadelake-Server-v2'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Cascadelake-Server-v3'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Cascadelake-Server-v4'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Cascadelake-Server-v5'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Cooperlake'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Cooperlake-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Cooperlake-v2'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Denverton'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='mpx'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Denverton-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='mpx'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Denverton-v2'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Denverton-v3'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Dhyana-v2'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='EPYC-Genoa'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amd-psfd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='auto-ibrs'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512ifma'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='no-nested-data-bp'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='null-sel-clr-base'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='stibp-always-on'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='EPYC-Genoa-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amd-psfd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='auto-ibrs'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512ifma'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='no-nested-data-bp'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='null-sel-clr-base'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='stibp-always-on'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='EPYC-Milan'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='EPYC-Milan-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='EPYC-Milan-v2'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amd-psfd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='no-nested-data-bp'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='null-sel-clr-base'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='stibp-always-on'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='EPYC-Rome'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='EPYC-Rome-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='EPYC-Rome-v2'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='EPYC-Rome-v3'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='EPYC-v3'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='EPYC-v4'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='GraniteRapids'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-fp16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-int8'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-tile'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx-vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-fp16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512ifma'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fbsdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrc'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrs'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fzrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='mcdt-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pbrsb-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='prefetchiti'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='psdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='sbdr-ssdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='serialize'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='tsx-ldtrk'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xfd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='GraniteRapids-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-fp16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-int8'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-tile'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx-vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-fp16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512ifma'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fbsdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrc'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrs'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fzrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='mcdt-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pbrsb-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='prefetchiti'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='psdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='sbdr-ssdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='serialize'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='tsx-ldtrk'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xfd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='GraniteRapids-v2'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-fp16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-int8'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='amx-tile'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx-vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx10'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx10-128'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx10-256'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx10-512'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-bf16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-fp16'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512ifma'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='cldemote'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fbsdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrc'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fsrs'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='fzrm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='mcdt-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='movdir64b'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='movdiri'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pbrsb-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='prefetchiti'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='psdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='sbdr-ssdp-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='serialize'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='ss'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='tsx-ldtrk'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xfd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Haswell'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Haswell-IBRS'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Haswell-noTSX'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Haswell-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Haswell-v2'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Haswell-v3'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Haswell-v4'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Icelake-Server'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Icelake-Server-noTSX'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 07 10:02:03 compute-1 nova_compute[229446]:       <blockers model='Icelake-Server-v1'>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:03 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <blockers model='Icelake-Server-v2'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <blockers model='Icelake-Server-v3'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <blockers model='Icelake-Server-v4'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512ifma'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <blockers model='Icelake-Server-v5'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512ifma'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <blockers model='Icelake-Server-v6'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512ifma'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <blockers model='Icelake-Server-v7'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512ifma'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <blockers model='IvyBridge'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <blockers model='IvyBridge-IBRS'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <blockers model='IvyBridge-v1'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <blockers model='IvyBridge-v2'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <blockers model='KnightsMill'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512-4fmaps'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512-4vnniw'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512er'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512pf'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='ss'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <blockers model='KnightsMill-v1'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512-4fmaps'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512-4vnniw'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512er'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512pf'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='ss'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <blockers model='Opteron_G4'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='fma4'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='xop'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <blockers model='Opteron_G4-v1'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='fma4'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='xop'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <blockers model='Opteron_G5'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='fma4'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='tbm'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='xop'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <blockers model='Opteron_G5-v1'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='fma4'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='tbm'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='xop'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <blockers model='SapphireRapids'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='amx-bf16'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='amx-int8'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='amx-tile'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx-vnni'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512-bf16'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512-fp16'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512ifma'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='fsrc'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='fsrs'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='fzrm'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='serialize'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='tsx-ldtrk'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='xfd'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <blockers model='SapphireRapids-v1'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='amx-bf16'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='amx-int8'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='amx-tile'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx-vnni'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512-bf16'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512-fp16'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512ifma'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='fsrc'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='fsrs'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='fzrm'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='serialize'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='tsx-ldtrk'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='xfd'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <blockers model='SapphireRapids-v2'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='amx-bf16'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='amx-int8'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='amx-tile'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx-vnni'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512-bf16'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512-fp16'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512ifma'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='fbsdp-no'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='fsrc'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='fsrs'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='fzrm'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='psdp-no'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='sbdr-ssdp-no'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='serialize'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='tsx-ldtrk'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='xfd'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <blockers model='SapphireRapids-v3'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='amx-bf16'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='amx-int8'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='amx-tile'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx-vnni'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512-bf16'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512-fp16'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512bitalg'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512ifma'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512vbmi'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512vnni'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='cldemote'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='fbsdp-no'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='fsrc'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='fsrs'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='fzrm'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='la57'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='movdir64b'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='movdiri'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='psdp-no'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='sbdr-ssdp-no'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='serialize'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='ss'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='taa-no'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='tsx-ldtrk'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='xfd'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <blockers model='SierraForest'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx-ifma'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx-ne-convert'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx-vnni'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx-vnni-int8'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='cmpccxadd'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='fbsdp-no'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='fsrs'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='mcdt-no'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='pbrsb-no'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='psdp-no'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='sbdr-ssdp-no'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='serialize'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <blockers model='SierraForest-v1'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx-ifma'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx-ne-convert'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx-vnni'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx-vnni-int8'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='cmpccxadd'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='fbsdp-no'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='fsrm'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='fsrs'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='ibrs-all'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='mcdt-no'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='pbrsb-no'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='psdp-no'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='sbdr-ssdp-no'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='serialize'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='vaes'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <blockers model='Skylake-Client'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <blockers model='Skylake-Client-IBRS'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <blockers model='Skylake-Client-v1'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <blockers model='Skylake-Client-v2'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <blockers model='Skylake-Client-v3'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <blockers model='Skylake-Client-v4'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <blockers model='Skylake-Server'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <blockers model='Skylake-Server-IBRS'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <blockers model='Skylake-Server-v1'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <blockers model='Skylake-Server-v2'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='hle'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='rtm'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <blockers model='Skylake-Server-v3'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <blockers model='Skylake-Server-v4'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <blockers model='Skylake-Server-v5'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512bw'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512cd'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512dq'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512f'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='avx512vl'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='invpcid'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='pcid'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='pku'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <blockers model='Snowridge'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='cldemote'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='core-capability'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='movdir64b'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='movdiri'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='mpx'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='split-lock-detect'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 07 10:02:04 compute-1 ceph-mon[80077]: pgmap v587: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <blockers model='Snowridge-v1'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='cldemote'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='core-capability'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='movdir64b'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='movdiri'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='mpx'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='split-lock-detect'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <blockers model='Snowridge-v2'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='cldemote'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='core-capability'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='movdir64b'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='movdiri'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='split-lock-detect'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <blockers model='Snowridge-v3'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='cldemote'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='core-capability'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='movdir64b'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='movdiri'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='split-lock-detect'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <blockers model='Snowridge-v4'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='cldemote'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='erms'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='gfni'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='movdir64b'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='movdiri'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='xsaves'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <blockers model='athlon'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='3dnow'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='3dnowext'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <blockers model='athlon-v1'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='3dnow'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='3dnowext'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <blockers model='core2duo'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='ss'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <blockers model='core2duo-v1'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='ss'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <blockers model='coreduo'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='ss'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <blockers model='coreduo-v1'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='ss'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <blockers model='n270'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='ss'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <blockers model='n270-v1'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='ss'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <blockers model='phenom'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='3dnow'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='3dnowext'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <blockers model='phenom-v1'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='3dnow'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <feature name='3dnowext'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </blockers>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 07 10:02:04 compute-1 nova_compute[229446]:     </mode>
Dec 07 10:02:04 compute-1 nova_compute[229446]:   </cpu>
Dec 07 10:02:04 compute-1 nova_compute[229446]:   <memoryBacking supported='yes'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:     <enum name='sourceType'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <value>file</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <value>anonymous</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <value>memfd</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:     </enum>
Dec 07 10:02:04 compute-1 nova_compute[229446]:   </memoryBacking>
Dec 07 10:02:04 compute-1 nova_compute[229446]:   <devices>
Dec 07 10:02:04 compute-1 nova_compute[229446]:     <disk supported='yes'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <enum name='diskDevice'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>disk</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>cdrom</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>floppy</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>lun</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <enum name='bus'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>ide</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>fdc</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>scsi</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>virtio</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>usb</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>sata</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <enum name='model'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>virtio</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>virtio-transitional</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>virtio-non-transitional</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:04 compute-1 nova_compute[229446]:     </disk>
Dec 07 10:02:04 compute-1 nova_compute[229446]:     <graphics supported='yes'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <enum name='type'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>vnc</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>egl-headless</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>dbus</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:04 compute-1 nova_compute[229446]:     </graphics>
Dec 07 10:02:04 compute-1 nova_compute[229446]:     <video supported='yes'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <enum name='modelType'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>vga</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>cirrus</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>virtio</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>none</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>bochs</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>ramfb</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:04 compute-1 nova_compute[229446]:     </video>
Dec 07 10:02:04 compute-1 nova_compute[229446]:     <hostdev supported='yes'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <enum name='mode'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>subsystem</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <enum name='startupPolicy'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>default</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>mandatory</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>requisite</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>optional</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <enum name='subsysType'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>usb</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>pci</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>scsi</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <enum name='capsType'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <enum name='pciBackend'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:     </hostdev>
Dec 07 10:02:04 compute-1 nova_compute[229446]:     <rng supported='yes'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <enum name='model'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>virtio</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>virtio-transitional</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>virtio-non-transitional</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <enum name='backendModel'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>random</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>egd</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>builtin</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:04 compute-1 nova_compute[229446]:     </rng>
Dec 07 10:02:04 compute-1 nova_compute[229446]:     <filesystem supported='yes'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <enum name='driverType'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>path</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>handle</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>virtiofs</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:04 compute-1 nova_compute[229446]:     </filesystem>
Dec 07 10:02:04 compute-1 nova_compute[229446]:     <tpm supported='yes'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <enum name='model'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>tpm-tis</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>tpm-crb</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <enum name='backendModel'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>emulator</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>external</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <enum name='backendVersion'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>2.0</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:04 compute-1 nova_compute[229446]:     </tpm>
Dec 07 10:02:04 compute-1 nova_compute[229446]:     <redirdev supported='yes'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <enum name='bus'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>usb</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:04 compute-1 nova_compute[229446]:     </redirdev>
Dec 07 10:02:04 compute-1 nova_compute[229446]:     <channel supported='yes'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <enum name='type'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>pty</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>unix</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:04 compute-1 nova_compute[229446]:     </channel>
Dec 07 10:02:04 compute-1 nova_compute[229446]:     <crypto supported='yes'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <enum name='model'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <enum name='type'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>qemu</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <enum name='backendModel'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>builtin</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:04 compute-1 nova_compute[229446]:     </crypto>
Dec 07 10:02:04 compute-1 nova_compute[229446]:     <interface supported='yes'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <enum name='backendType'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>default</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>passt</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:04 compute-1 nova_compute[229446]:     </interface>
Dec 07 10:02:04 compute-1 nova_compute[229446]:     <panic supported='yes'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <enum name='model'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>isa</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>hyperv</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:04 compute-1 nova_compute[229446]:     </panic>
Dec 07 10:02:04 compute-1 nova_compute[229446]:     <console supported='yes'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <enum name='type'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>null</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>vc</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>pty</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>dev</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>file</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>pipe</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>stdio</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>udp</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>tcp</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>unix</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>qemu-vdagent</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>dbus</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:04 compute-1 nova_compute[229446]:     </console>
Dec 07 10:02:04 compute-1 nova_compute[229446]:   </devices>
Dec 07 10:02:04 compute-1 nova_compute[229446]:   <features>
Dec 07 10:02:04 compute-1 nova_compute[229446]:     <gic supported='no'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:     <vmcoreinfo supported='yes'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:     <genid supported='yes'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:     <backingStoreInput supported='yes'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:     <backup supported='yes'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:     <async-teardown supported='yes'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:     <ps2 supported='yes'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:     <sev supported='no'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:     <sgx supported='no'/>
Dec 07 10:02:04 compute-1 nova_compute[229446]:     <hyperv supported='yes'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <enum name='features'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>relaxed</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>vapic</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>spinlocks</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>vpindex</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>runtime</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>synic</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>stimer</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>reset</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>vendor_id</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>frequencies</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>reenlightenment</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>tlbflush</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>ipi</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>avic</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>emsr_bitmap</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>xmm_input</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <defaults>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <spinlocks>4095</spinlocks>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <stimer_direct>on</stimer_direct>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <tlbflush_direct>on</tlbflush_direct>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <tlbflush_extended>on</tlbflush_extended>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </defaults>
Dec 07 10:02:04 compute-1 nova_compute[229446]:     </hyperv>
Dec 07 10:02:04 compute-1 nova_compute[229446]:     <launchSecurity supported='yes'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       <enum name='sectype'>
Dec 07 10:02:04 compute-1 nova_compute[229446]:         <value>tdx</value>
Dec 07 10:02:04 compute-1 nova_compute[229446]:       </enum>
Dec 07 10:02:04 compute-1 nova_compute[229446]:     </launchSecurity>
Dec 07 10:02:04 compute-1 nova_compute[229446]:   </features>
Dec 07 10:02:04 compute-1 nova_compute[229446]: </domainCapabilities>
Dec 07 10:02:04 compute-1 nova_compute[229446]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
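The <domainCapabilities> document logged above is what nova's _get_domain_capabilities receives from libvirt. Below is a minimal sketch of fetching the same XML directly through the libvirt-python bindings; the connection URI, emulator path, machine type and virt type are illustrative assumptions, not values read from this host.

    import libvirt
    import xml.etree.ElementTree as ET

    conn = libvirt.open("qemu:///system")            # assumed URI
    caps_xml = conn.getDomainCapabilities(
        "/usr/libexec/qemu-kvm", "x86_64", "q35", "kvm", 0)
    root = ET.fromstring(caps_xml)

    # Print CPU models reported as usable but deprecated, mirroring the
    # <model usable='yes' deprecated='yes'> entries in the log above.
    for model in root.findall("./cpu/mode[@name='custom']/model"):
        if model.get("usable") == "yes" and model.get("deprecated") == "yes":
            print(model.text)
    conn.close()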
Dec 07 10:02:04 compute-1 nova_compute[229446]: 2025-12-07 10:02:03.966 229450 DEBUG nova.virt.libvirt.host [None req-fcaa2c27-ac4f-434f-ad0a-633a4cae8103 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec 07 10:02:04 compute-1 nova_compute[229446]: 2025-12-07 10:02:03.966 229450 INFO nova.virt.libvirt.host [None req-fcaa2c27-ac4f-434f-ad0a-633a4cae8103 - - - - - -] Secure Boot support detected
Dec 07 10:02:04 compute-1 nova_compute[229446]: 2025-12-07 10:02:03.968 229450 INFO nova.virt.libvirt.driver [None req-fcaa2c27-ac4f-434f-ad0a-633a4cae8103 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec 07 10:02:04 compute-1 nova_compute[229446]: 2025-12-07 10:02:03.968 229450 INFO nova.virt.libvirt.driver [None req-fcaa2c27-ac4f-434f-ad0a-633a4cae8103 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec 07 10:02:04 compute-1 nova_compute[229446]: 2025-12-07 10:02:03.976 229450 DEBUG nova.virt.libvirt.driver [None req-fcaa2c27-ac4f-434f-ad0a-633a4cae8103 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Dec 07 10:02:04 compute-1 nova_compute[229446]: 2025-12-07 10:02:04.026 229450 INFO nova.virt.node [None req-fcaa2c27-ac4f-434f-ad0a-633a4cae8103 - - - - - -] Determined node identity 58b51610-0751-43d9-94a3-66540bffec81 from /var/lib/nova/compute_id
Dec 07 10:02:04 compute-1 nova_compute[229446]: 2025-12-07 10:02:04.053 229450 WARNING nova.compute.manager [None req-fcaa2c27-ac4f-434f-ad0a-633a4cae8103 - - - - - -] Compute nodes ['58b51610-0751-43d9-94a3-66540bffec81'] for host compute-1.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Dec 07 10:02:04 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:02:04 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9610004440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:02:04 compute-1 nova_compute[229446]: 2025-12-07 10:02:04.147 229450 INFO nova.compute.manager [None req-fcaa2c27-ac4f-434f-ad0a-633a4cae8103 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Dec 07 10:02:04 compute-1 nova_compute[229446]: 2025-12-07 10:02:04.379 229450 WARNING nova.compute.manager [None req-fcaa2c27-ac4f-434f-ad0a-633a4cae8103 - - - - - -] No compute node record found for host compute-1.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-1.ctlplane.example.com could not be found.
Dec 07 10:02:04 compute-1 nova_compute[229446]: 2025-12-07 10:02:04.379 229450 DEBUG oslo_concurrency.lockutils [None req-fcaa2c27-ac4f-434f-ad0a-633a4cae8103 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:02:04 compute-1 nova_compute[229446]: 2025-12-07 10:02:04.379 229450 DEBUG oslo_concurrency.lockutils [None req-fcaa2c27-ac4f-434f-ad0a-633a4cae8103 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:02:04 compute-1 nova_compute[229446]: 2025-12-07 10:02:04.379 229450 DEBUG oslo_concurrency.lockutils [None req-fcaa2c27-ac4f-434f-ad0a-633a4cae8103 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:02:04 compute-1 nova_compute[229446]: 2025-12-07 10:02:04.379 229450 DEBUG nova.compute.resource_tracker [None req-fcaa2c27-ac4f-434f-ad0a-633a4cae8103 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 07 10:02:04 compute-1 nova_compute[229446]: 2025-12-07 10:02:04.380 229450 DEBUG oslo_concurrency.processutils [None req-fcaa2c27-ac4f-434f-ad0a-633a4cae8103 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 07 10:02:04 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:04 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:02:04 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:02:04.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:02:04 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 07 10:02:04 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1047187675' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:02:04 compute-1 nova_compute[229446]: 2025-12-07 10:02:04.827 229450 DEBUG oslo_concurrency.processutils [None req-fcaa2c27-ac4f-434f-ad0a-633a4cae8103 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
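The resource audit above shells out to `ceph df --format=json` (via oslo_concurrency.processutils) to measure available Ceph capacity. A small sketch of running the same command and reading the cluster totals follows; the client id and conf path come from the log line, while the JSON field names follow the usual `ceph df` output layout and are an assumption here rather than something shown in this log.

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True).stdout

    stats = json.loads(out)["stats"]
    total_gib = stats["total_bytes"] / 1024 ** 3
    avail_gib = stats["total_avail_bytes"] / 1024 ** 3
    print(f"cluster: {avail_gib:.1f} GiB free of {total_gib:.1f} GiB")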
Dec 07 10:02:04 compute-1 systemd[1]: Starting libvirt nodedev daemon...
Dec 07 10:02:04 compute-1 systemd[1]: Started libvirt nodedev daemon.
Dec 07 10:02:04 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:02:04 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9608003720 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:02:04 compute-1 sudo[230197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byqojvcfvxksqnisqejqdhgjpkpztynl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101724.1569872-4334-280213625832342/AnsiballZ_podman_container.py'
Dec 07 10:02:04 compute-1 sudo[230197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:02:05 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/1047187675' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:02:05 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/1207658' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:02:05 compute-1 nova_compute[229446]: 2025-12-07 10:02:05.159 229450 WARNING nova.virt.libvirt.driver [None req-fcaa2c27-ac4f-434f-ad0a-633a4cae8103 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 07 10:02:05 compute-1 nova_compute[229446]: 2025-12-07 10:02:05.161 229450 DEBUG nova.compute.resource_tracker [None req-fcaa2c27-ac4f-434f-ad0a-633a4cae8103 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5236MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 07 10:02:05 compute-1 nova_compute[229446]: 2025-12-07 10:02:05.161 229450 DEBUG oslo_concurrency.lockutils [None req-fcaa2c27-ac4f-434f-ad0a-633a4cae8103 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:02:05 compute-1 nova_compute[229446]: 2025-12-07 10:02:05.162 229450 DEBUG oslo_concurrency.lockutils [None req-fcaa2c27-ac4f-434f-ad0a-633a4cae8103 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:02:05 compute-1 nova_compute[229446]: 2025-12-07 10:02:05.186 229450 WARNING nova.compute.resource_tracker [None req-fcaa2c27-ac4f-434f-ad0a-633a4cae8103 - - - - - -] No compute node record for compute-1.ctlplane.example.com:58b51610-0751-43d9-94a3-66540bffec81: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 58b51610-0751-43d9-94a3-66540bffec81 could not be found.
Dec 07 10:02:05 compute-1 nova_compute[229446]: 2025-12-07 10:02:05.217 229450 INFO nova.compute.resource_tracker [None req-fcaa2c27-ac4f-434f-ad0a-633a4cae8103 - - - - - -] Compute node record created for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com with uuid: 58b51610-0751-43d9-94a3-66540bffec81
Dec 07 10:02:05 compute-1 python3.9[230199]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec 07 10:02:05 compute-1 nova_compute[229446]: 2025-12-07 10:02:05.279 229450 DEBUG nova.compute.resource_tracker [None req-fcaa2c27-ac4f-434f-ad0a-633a4cae8103 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 07 10:02:05 compute-1 nova_compute[229446]: 2025-12-07 10:02:05.280 229450 DEBUG nova.compute.resource_tracker [None req-fcaa2c27-ac4f-434f-ad0a-633a4cae8103 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 07 10:02:05 compute-1 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 07 10:02:05 compute-1 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 07 10:02:05 compute-1 sudo[230197]: pam_unix(sudo:session): session closed for user root
Dec 07 10:02:05 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:02:05 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9608003720 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:02:05 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:05 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:02:05 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:02:05.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:02:06 compute-1 sudo[230383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uggzcukjuluarltdgmjwtvivtbugdbgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101725.6912718-4358-190099211446617/AnsiballZ_systemd.py'
Dec 07 10:02:06 compute-1 sudo[230383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:02:06 compute-1 ceph-mon[80077]: pgmap v588: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Dec 07 10:02:06 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/2797389742' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:02:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:02:06 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9614003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:02:06 compute-1 podman[230347]: 2025-12-07 10:02:06.127322458 +0000 UTC m=+0.130445024 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125)
Dec 07 10:02:06 compute-1 python3.9[230392]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 07 10:02:06 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:02:06 compute-1 systemd[1]: Stopping nova_compute container...
Dec 07 10:02:06 compute-1 nova_compute[229446]: 2025-12-07 10:02:06.443 229450 DEBUG oslo_concurrency.lockutils [None req-fcaa2c27-ac4f-434f-ad0a-633a4cae8103 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.282s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:02:06 compute-1 nova_compute[229446]: 2025-12-07 10:02:06.444 229450 DEBUG oslo_concurrency.lockutils [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 07 10:02:06 compute-1 nova_compute[229446]: 2025-12-07 10:02:06.444 229450 DEBUG oslo_concurrency.lockutils [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 07 10:02:06 compute-1 nova_compute[229446]: 2025-12-07 10:02:06.444 229450 DEBUG oslo_concurrency.lockutils [None req-5662c899-821e-42a1-b546-856ceea97072 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 07 10:02:06 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:06 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:02:06 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:02:06.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:02:06 compute-1 virtqemud[229835]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Dec 07 10:02:06 compute-1 virtqemud[229835]: hostname: compute-1
Dec 07 10:02:06 compute-1 virtqemud[229835]: End of file while reading data: Input/output error
Dec 07 10:02:06 compute-1 systemd[1]: libpod-cdd17e06b5d5370f42d2aebef8d8030ee5a261133461f88e1569c7a0392804a1.scope: Deactivated successfully.
Dec 07 10:02:06 compute-1 systemd[1]: libpod-cdd17e06b5d5370f42d2aebef8d8030ee5a261133461f88e1569c7a0392804a1.scope: Consumed 3.705s CPU time.
Dec 07 10:02:06 compute-1 conmon[229446]: conmon cdd17e06b5d5370f42d2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cdd17e06b5d5370f42d2aebef8d8030ee5a261133461f88e1569c7a0392804a1.scope/container/memory.events
Dec 07 10:02:06 compute-1 podman[230406]: 2025-12-07 10:02:06.92331435 +0000 UTC m=+0.519897801 container died cdd17e06b5d5370f42d2aebef8d8030ee5a261133461f88e1569c7a0392804a1 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true)
Dec 07 10:02:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:02:06 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9610004440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:02:06 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cdd17e06b5d5370f42d2aebef8d8030ee5a261133461f88e1569c7a0392804a1-userdata-shm.mount: Deactivated successfully.
Dec 07 10:02:06 compute-1 systemd[1]: var-lib-containers-storage-overlay-a579cbbcc9c2946533011ae3acd058b6bc3c818efb25663cc922f9e26c5c8dc3-merged.mount: Deactivated successfully.
Dec 07 10:02:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:02:07 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9610004440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:02:07 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:07 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:02:07 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:02:07.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:02:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/100207 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 07 10:02:08 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:02:08 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9608003720 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:02:08 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:08 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:02:08 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:02:08.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:02:08 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:02:08 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9608003720 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:02:09 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:02:09 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f96280021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:02:09 compute-1 podman[230406]: 2025-12-07 10:02:09.538138397 +0000 UTC m=+3.134721838 container cleanup cdd17e06b5d5370f42d2aebef8d8030ee5a261133461f88e1569c7a0392804a1 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=nova_compute, io.buildah.version=1.41.3)
Dec 07 10:02:09 compute-1 podman[230406]: nova_compute
Dec 07 10:02:09 compute-1 ceph-mon[80077]: pgmap v589: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 511 B/s wr, 1 op/s
Dec 07 10:02:09 compute-1 podman[230439]: nova_compute
Dec 07 10:02:09 compute-1 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Dec 07 10:02:09 compute-1 systemd[1]: Stopped nova_compute container.
Dec 07 10:02:09 compute-1 systemd[1]: Starting nova_compute container...
Dec 07 10:02:09 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:09 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:02:09 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:02:09.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:02:09 compute-1 podman[230452]: 2025-12-07 10:02:09.696310462 +0000 UTC m=+0.060512574 container health_status 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd)
Dec 07 10:02:09 compute-1 systemd[1]: Started libcrun container.
Dec 07 10:02:09 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a579cbbcc9c2946533011ae3acd058b6bc3c818efb25663cc922f9e26c5c8dc3/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec 07 10:02:09 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a579cbbcc9c2946533011ae3acd058b6bc3c818efb25663cc922f9e26c5c8dc3/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 07 10:02:09 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a579cbbcc9c2946533011ae3acd058b6bc3c818efb25663cc922f9e26c5c8dc3/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 07 10:02:09 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a579cbbcc9c2946533011ae3acd058b6bc3c818efb25663cc922f9e26c5c8dc3/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 07 10:02:09 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a579cbbcc9c2946533011ae3acd058b6bc3c818efb25663cc922f9e26c5c8dc3/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 07 10:02:10 compute-1 podman[230453]: 2025-12-07 10:02:10.007225401 +0000 UTC m=+0.365659784 container init cdd17e06b5d5370f42d2aebef8d8030ee5a261133461f88e1569c7a0392804a1 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 07 10:02:10 compute-1 podman[230453]: 2025-12-07 10:02:10.015283261 +0000 UTC m=+0.373717664 container start cdd17e06b5d5370f42d2aebef8d8030ee5a261133461f88e1569c7a0392804a1 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec 07 10:02:10 compute-1 nova_compute[230488]: + sudo -E kolla_set_configs
Dec 07 10:02:10 compute-1 podman[230453]: nova_compute
Dec 07 10:02:10 compute-1 systemd[1]: Started nova_compute container.
Dec 07 10:02:10 compute-1 sudo[230383]: pam_unix(sudo:session): session closed for user root
Dec 07 10:02:10 compute-1 nova_compute[230488]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 07 10:02:10 compute-1 nova_compute[230488]: INFO:__main__:Validating config file
Dec 07 10:02:10 compute-1 nova_compute[230488]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 07 10:02:10 compute-1 nova_compute[230488]: INFO:__main__:Copying service configuration files
Dec 07 10:02:10 compute-1 nova_compute[230488]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec 07 10:02:10 compute-1 nova_compute[230488]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec 07 10:02:10 compute-1 nova_compute[230488]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec 07 10:02:10 compute-1 nova_compute[230488]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Dec 07 10:02:10 compute-1 nova_compute[230488]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec 07 10:02:10 compute-1 nova_compute[230488]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec 07 10:02:10 compute-1 nova_compute[230488]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 07 10:02:10 compute-1 nova_compute[230488]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 07 10:02:10 compute-1 nova_compute[230488]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 07 10:02:10 compute-1 nova_compute[230488]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 07 10:02:10 compute-1 nova_compute[230488]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 07 10:02:10 compute-1 nova_compute[230488]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 07 10:02:10 compute-1 nova_compute[230488]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Dec 07 10:02:10 compute-1 nova_compute[230488]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec 07 10:02:10 compute-1 nova_compute[230488]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec 07 10:02:10 compute-1 nova_compute[230488]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 07 10:02:10 compute-1 nova_compute[230488]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 07 10:02:10 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:02:10 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9610004440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:02:10 compute-1 nova_compute[230488]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 07 10:02:10 compute-1 nova_compute[230488]: INFO:__main__:Deleting /etc/ceph
Dec 07 10:02:10 compute-1 nova_compute[230488]: INFO:__main__:Creating directory /etc/ceph
Dec 07 10:02:10 compute-1 nova_compute[230488]: INFO:__main__:Setting permission for /etc/ceph
Dec 07 10:02:10 compute-1 nova_compute[230488]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Dec 07 10:02:10 compute-1 nova_compute[230488]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 07 10:02:10 compute-1 nova_compute[230488]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Dec 07 10:02:10 compute-1 nova_compute[230488]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 07 10:02:10 compute-1 nova_compute[230488]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Dec 07 10:02:10 compute-1 nova_compute[230488]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec 07 10:02:10 compute-1 nova_compute[230488]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 07 10:02:10 compute-1 nova_compute[230488]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Dec 07 10:02:10 compute-1 nova_compute[230488]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec 07 10:02:10 compute-1 nova_compute[230488]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 07 10:02:10 compute-1 nova_compute[230488]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec 07 10:02:10 compute-1 nova_compute[230488]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec 07 10:02:10 compute-1 nova_compute[230488]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec 07 10:02:10 compute-1 nova_compute[230488]: INFO:__main__:Writing out command to execute
Dec 07 10:02:10 compute-1 nova_compute[230488]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 07 10:02:10 compute-1 nova_compute[230488]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 07 10:02:10 compute-1 nova_compute[230488]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec 07 10:02:10 compute-1 nova_compute[230488]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 07 10:02:10 compute-1 nova_compute[230488]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 07 10:02:10 compute-1 nova_compute[230488]: ++ cat /run_command
Dec 07 10:02:10 compute-1 nova_compute[230488]: + CMD=nova-compute
Dec 07 10:02:10 compute-1 nova_compute[230488]: + ARGS=
Dec 07 10:02:10 compute-1 nova_compute[230488]: + sudo kolla_copy_cacerts
Dec 07 10:02:10 compute-1 nova_compute[230488]: + [[ ! -n '' ]]
Dec 07 10:02:10 compute-1 nova_compute[230488]: + . kolla_extend_start
Dec 07 10:02:10 compute-1 nova_compute[230488]: Running command: 'nova-compute'
Dec 07 10:02:10 compute-1 nova_compute[230488]: + echo 'Running command: '\''nova-compute'\'''
Dec 07 10:02:10 compute-1 nova_compute[230488]: + umask 0022
Dec 07 10:02:10 compute-1 nova_compute[230488]: + exec nova-compute
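The kolla_set_configs trace above (loading the config file, the Copying/Setting permission steps, then exec of the command read from /run_command) is driven by the container's /var/lib/kolla/config_files/config.json. Below is a rough sketch of that copy loop under the COPY_ALWAYS strategy, for orientation only; it is not the real kolla_set_configs, and the owner/perm handling is an assumption about the config format rather than something visible in this log.

    import json
    import os
    import shutil

    with open("/var/lib/kolla/config_files/config.json") as f:
        cfg = json.load(f)

    for entry in cfg.get("config_files", []):
        dest = entry["dest"]
        shutil.copy(entry["source"], dest)           # "Copying ... to ..." lines
        if "owner" in entry:
            shutil.chown(dest, user=entry["owner"])  # "Setting permission for ..."
        if "perm" in entry:
            os.chmod(dest, int(entry["perm"], 8))    # perm given as an octal string

    print("Running command:", cfg["command"])        # matches the exec step above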
Dec 07 10:02:10 compute-1 ceph-mon[80077]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #37. Immutable memtables: 0.
Dec 07 10:02:10 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:02:10.333826) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 07 10:02:10 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 37
Dec 07 10:02:10 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101730333859, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 1075, "num_deletes": 251, "total_data_size": 2587823, "memory_usage": 2626672, "flush_reason": "Manual Compaction"}
Dec 07 10:02:10 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #38: started
Dec 07 10:02:10 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101730352706, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 38, "file_size": 1690193, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19982, "largest_seqno": 21052, "table_properties": {"data_size": 1685365, "index_size": 2416, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10438, "raw_average_key_size": 19, "raw_value_size": 1675758, "raw_average_value_size": 3161, "num_data_blocks": 107, "num_entries": 530, "num_filter_entries": 530, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765101642, "oldest_key_time": 1765101642, "file_creation_time": 1765101730, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "19b7cd17-892b-4642-8771-311739802c4a", "db_session_id": "JE48258Z8V09XG5T5TD7", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Dec 07 10:02:10 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 18928 microseconds, and 6127 cpu microseconds.
Dec 07 10:02:10 compute-1 ceph-mon[80077]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 07 10:02:10 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:02:10.352752) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #38: 1690193 bytes OK
Dec 07 10:02:10 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:02:10.352770) [db/memtable_list.cc:519] [default] Level-0 commit table #38 started
Dec 07 10:02:10 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:02:10.354626) [db/memtable_list.cc:722] [default] Level-0 commit table #38: memtable #1 done
Dec 07 10:02:10 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:02:10.354754) EVENT_LOG_v1 {"time_micros": 1765101730354741, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 07 10:02:10 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:02:10.354790) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 07 10:02:10 compute-1 ceph-mon[80077]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 2582615, prev total WAL file size 2582615, number of live WAL files 2.
Dec 07 10:02:10 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000034.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 10:02:10 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:02:10.355771) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Dec 07 10:02:10 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 07 10:02:10 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [38(1650KB)], [36(13MB)]
Dec 07 10:02:10 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101730355814, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [38], "files_L6": [36], "score": -1, "input_data_size": 15767037, "oldest_snapshot_seqno": -1}
Dec 07 10:02:10 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #39: 5106 keys, 13582964 bytes, temperature: kUnknown
Dec 07 10:02:10 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101730449474, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 39, "file_size": 13582964, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13547506, "index_size": 21588, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12805, "raw_key_size": 130204, "raw_average_key_size": 25, "raw_value_size": 13453561, "raw_average_value_size": 2634, "num_data_blocks": 886, "num_entries": 5106, "num_filter_entries": 5106, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100473, "oldest_key_time": 0, "file_creation_time": 1765101730, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "19b7cd17-892b-4642-8771-311739802c4a", "db_session_id": "JE48258Z8V09XG5T5TD7", "orig_file_number": 39, "seqno_to_time_mapping": "N/A"}}
Dec 07 10:02:10 compute-1 ceph-mon[80077]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 07 10:02:10 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:02:10.449856) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 13582964 bytes
Dec 07 10:02:10 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:02:10.508376) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 168.1 rd, 144.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 13.4 +0.0 blob) out(13.0 +0.0 blob), read-write-amplify(17.4) write-amplify(8.0) OK, records in: 5624, records dropped: 518 output_compression: NoCompression
Dec 07 10:02:10 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:02:10.508438) EVENT_LOG_v1 {"time_micros": 1765101730508417, "job": 20, "event": "compaction_finished", "compaction_time_micros": 93775, "compaction_time_cpu_micros": 29310, "output_level": 6, "num_output_files": 1, "total_output_size": 13582964, "num_input_records": 5624, "num_output_records": 5106, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 07 10:02:10 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 10:02:10 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101730509081, "job": 20, "event": "table_file_deletion", "file_number": 38}
Dec 07 10:02:10 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000036.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 10:02:10 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101730511811, "job": 20, "event": "table_file_deletion", "file_number": 36}
Dec 07 10:02:10 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:02:10.355710) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:02:10 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:02:10.511855) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:02:10 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:02:10.511862) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:02:10 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:02:10.511864) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:02:10 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:02:10.511866) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:02:10 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:02:10.511868) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
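
The ceph-mon RocksDB block above (memtable flush, level-0 table #38, manual compaction of files 38+36 into table #39) emits machine-readable EVENT_LOG_v1 JSON payloads alongside the human-readable summaries. A minimal sketch for pulling those events back out of a journal dump like this one, assuming the lines have been saved to a file (the name ceph-mon.log is hypothetical):

    import json
    import re

    EVENT = re.compile(r"EVENT_LOG_v1 (\{.*\})\s*$")

    def rocksdb_events(path):
        """Yield the JSON payload of every RocksDB EVENT_LOG_v1 line in a log file."""
        with open(path) as fh:
            for line in fh:
                m = EVENT.search(line)
                if m:
                    yield json.loads(m.group(1))

    # Example: list finished compactions with their output size, matching the
    # "compaction_finished" event logged for JOB 20 above.
    for ev in rocksdb_events("ceph-mon.log"):
        if ev.get("event") == "compaction_finished":
            print(ev["job"], ev["output_level"], ev["total_output_size"])
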
Dec 07 10:02:10 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:10 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:02:10 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:02:10.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:02:10 compute-1 ceph-mon[80077]: pgmap v590: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 511 B/s wr, 1 op/s
Dec 07 10:02:10 compute-1 sudo[230650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybwmxcahjpdxwtxhrzxddushpgftwgny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765101730.3136141-4385-17482822980754/AnsiballZ_podman_container.py'
Dec 07 10:02:10 compute-1 sudo[230650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:02:10 compute-1 python3.9[230652]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec 07 10:02:10 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:02:10 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f962c001e90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:02:11 compute-1 systemd[1]: Started libpod-conmon-355252ff44d2290864a72f74aecfe53741fa03db42f41b32fff0149217c5d70c.scope.
Dec 07 10:02:11 compute-1 systemd[1]: Started libcrun container.
Dec 07 10:02:11 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c51b85fe355627ffd2f3878f2b4b8f3ff66535e2247508482555a1c1d43b670e/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Dec 07 10:02:11 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c51b85fe355627ffd2f3878f2b4b8f3ff66535e2247508482555a1c1d43b670e/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 07 10:02:11 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c51b85fe355627ffd2f3878f2b4b8f3ff66535e2247508482555a1c1d43b670e/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Dec 07 10:02:11 compute-1 podman[230677]: 2025-12-07 10:02:11.175050345 +0000 UTC m=+0.228457865 container init 355252ff44d2290864a72f74aecfe53741fa03db42f41b32fff0149217c5d70c (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_id=edpm)
Dec 07 10:02:11 compute-1 podman[230677]: 2025-12-07 10:02:11.184150636 +0000 UTC m=+0.237558126 container start 355252ff44d2290864a72f74aecfe53741fa03db42f41b32fff0149217c5d70c (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute_init, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 07 10:02:11 compute-1 python3.9[230652]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
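
The two podman events above carry the edpm config_data that nova_compute_init was created from (image, user root, net none, label=disable, the /var/lib/nova volumes and the nova_statedir_ownership.py command). A hypothetical sketch of how such a mapping could be expanded into podman run arguments; the flag names are standard podman options, but the helper is illustrative and is not the containers.podman or edpm_ansible implementation:

    import shlex

    def podman_run_argv(name, cfg):
        """Build a 'podman run' argv from an edpm-style config_data mapping (illustrative only)."""
        argv = ["podman", "run", "--name", name]
        if cfg.get("net"):
            argv += ["--net", cfg["net"]]
        if cfg.get("user"):
            argv += ["--user", cfg["user"]]
        for opt in cfg.get("security_opt", []):
            argv += ["--security-opt", opt]
        for key, val in cfg.get("environment", {}).items():
            argv += ["--env", f"{key}={val}"]
        for vol in cfg.get("volumes", []):
            argv += ["--volume", vol]
        argv.append(cfg["image"])
        # config_data already holds the full 'bash -c ...' command line.
        argv += shlex.split(cfg.get("command", ""))
        return argv
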
Dec 07 10:02:11 compute-1 nova_compute_init[230695]: INFO:nova_statedir:Applying nova statedir ownership
Dec 07 10:02:11 compute-1 nova_compute_init[230695]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Dec 07 10:02:11 compute-1 nova_compute_init[230695]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Dec 07 10:02:11 compute-1 nova_compute_init[230695]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Dec 07 10:02:11 compute-1 nova_compute_init[230695]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Dec 07 10:02:11 compute-1 nova_compute_init[230695]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Dec 07 10:02:11 compute-1 nova_compute_init[230695]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Dec 07 10:02:11 compute-1 nova_compute_init[230695]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Dec 07 10:02:11 compute-1 nova_compute_init[230695]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Dec 07 10:02:11 compute-1 nova_compute_init[230695]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Dec 07 10:02:11 compute-1 nova_compute_init[230695]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Dec 07 10:02:11 compute-1 nova_compute_init[230695]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Dec 07 10:02:11 compute-1 nova_compute_init[230695]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Dec 07 10:02:11 compute-1 nova_compute_init[230695]: INFO:nova_statedir:Nova statedir ownership complete
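
The nova_compute_init messages above show the statedir ownership pass: every path under /var/lib/nova is checked, anything still owned by 1000:1000 is re-owned to the nova uid/gid 42436:42436, the SELinux context is set to container_file_t, and /var/lib/nova/compute_id is skipped via NOVA_STATEDIR_OWNERSHIP_SKIP. A minimal sketch that approximates the chown pass only (it is not the shipped /sbin/nova_statedir_ownership.py and omits the SELinux relabelling):

    import os

    TARGET_UID, TARGET_GID = 42436, 42436   # the target ownership reported in the log
    SKIP = {os.environ.get("NOVA_STATEDIR_OWNERSHIP_SKIP", "/var/lib/nova/compute_id")}

    def fix_ownership(root="/var/lib/nova"):
        """Re-own everything under root that is not already TARGET_UID:TARGET_GID."""
        for dirpath, dirnames, filenames in os.walk(root):
            for path in [dirpath] + [os.path.join(dirpath, f) for f in filenames]:
                if path in SKIP:
                    continue
                st = os.lstat(path)
                if (st.st_uid, st.st_gid) != (TARGET_UID, TARGET_GID):
                    os.lchown(path, TARGET_UID, TARGET_GID)
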
Dec 07 10:02:11 compute-1 systemd[1]: libpod-355252ff44d2290864a72f74aecfe53741fa03db42f41b32fff0149217c5d70c.scope: Deactivated successfully.
Dec 07 10:02:11 compute-1 podman[230708]: 2025-12-07 10:02:11.289428297 +0000 UTC m=+0.026645972 container died 355252ff44d2290864a72f74aecfe53741fa03db42f41b32fff0149217c5d70c (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute_init, org.label-schema.schema-version=1.0, container_name=nova_compute_init, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.build-date=20251125, config_id=edpm)
Dec 07 10:02:11 compute-1 sudo[230650]: pam_unix(sudo:session): session closed for user root
Dec 07 10:02:11 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:02:11 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-355252ff44d2290864a72f74aecfe53741fa03db42f41b32fff0149217c5d70c-userdata-shm.mount: Deactivated successfully.
Dec 07 10:02:11 compute-1 systemd[1]: var-lib-containers-storage-overlay-c51b85fe355627ffd2f3878f2b4b8f3ff66535e2247508482555a1c1d43b670e-merged.mount: Deactivated successfully.
Dec 07 10:02:11 compute-1 podman[230708]: 2025-12-07 10:02:11.364359455 +0000 UTC m=+0.101577100 container cleanup 355252ff44d2290864a72f74aecfe53741fa03db42f41b32fff0149217c5d70c (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5, name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Dec 07 10:02:11 compute-1 systemd[1]: libpod-conmon-355252ff44d2290864a72f74aecfe53741fa03db42f41b32fff0149217c5d70c.scope: Deactivated successfully.
Dec 07 10:02:11 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:02:11 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9608003720 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:02:11 compute-1 ceph-mon[80077]: pgmap v591: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec 07 10:02:11 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:11 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:02:11 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:02:11.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:02:11 compute-1 sshd-session[200700]: Connection closed by 192.168.122.30 port 55354
Dec 07 10:02:11 compute-1 sshd-session[200697]: pam_unix(sshd:session): session closed for user zuul
Dec 07 10:02:12 compute-1 systemd[1]: session-53.scope: Deactivated successfully.
Dec 07 10:02:12 compute-1 systemd[1]: session-53.scope: Consumed 2min 23.573s CPU time.
Dec 07 10:02:12 compute-1 systemd-logind[796]: Session 53 logged out. Waiting for processes to exit.
Dec 07 10:02:12 compute-1 systemd-logind[796]: Removed session 53.
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.100 230492 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.101 230492 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.101 230492 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.101 230492 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
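
The four os_vif lines record nova-compute initialising the VIF plugin machinery: each plugin class registered under the os_vif entry-point namespace (linux_bridge, noop, ovs) is loaded once and then reported in a single INFO summary. A minimal sketch of the same call, assuming the os_vif library from this venv is importable:

    import os_vif

    # nova-compute invokes this once at startup; it loads every registered VIF
    # plugin (linux_bridge, noop, ovs above) and is a no-op on later calls.
    os_vif.initialize()
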
Dec 07 10:02:12 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:02:12 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f96280021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.235 230492 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.259 230492 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.259 230492 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
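
The three oslo_concurrency.processutils lines show a capability probe rather than a failure: grep exits 1 because /sbin/iscsiadm does not contain the node.session.scan string, and the caller deliberately does not retry. A minimal sketch of the same probe using oslo.concurrency, assuming it is installed; treating exit code 1 as a valid result keeps the non-match from raising ProcessExecutionError:

    from oslo_concurrency import processutils

    # Exit code 0 means iscsiadm advertises node.session.scan, 1 means it does not.
    out, err = processutils.execute('grep', '-F', 'node.session.scan',
                                    '/sbin/iscsiadm', check_exit_code=[0, 1])
    supports_manual_scan = 'node.session.scan' in out
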
Dec 07 10:02:12 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:12 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:02:12 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:02:12.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
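
The recurring radosgw entries above are anonymous HEAD / probes that complete with HTTP 200 and near-zero latency; the beast line carries the access-log fields. A small sketch for parsing those fields out of a journal dump (the regex is written against the exact format shown here and may need adjusting for other radosgw versions):

    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) \[(?P<time>[^\]]+)\] '
        r'"(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+).*latency=(?P<latency>[\d.]+)s'
    )

    def parse_beast(line):
        """Return the fields of one radosgw beast access-log line, or None."""
        m = BEAST.search(line)
        return m.groupdict() if m else None
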
Dec 07 10:02:12 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/4132046118' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:02:12 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1959919275' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:02:12 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.770 230492 INFO nova.virt.driver [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.881 230492 INFO nova.compute.provider_config [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.890 230492 DEBUG oslo_concurrency.lockutils [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.891 230492 DEBUG oslo_concurrency.lockutils [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.891 230492 DEBUG oslo_concurrency.lockutils [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.891 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.891 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.891 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.892 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.892 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.892 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.892 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.892 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.892 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.892 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.892 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.893 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.893 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.893 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.893 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.893 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.894 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.894 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.894 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.894 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] console_host                   = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.894 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.894 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.894 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.895 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.895 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.895 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.895 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.895 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.895 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.896 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.896 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.896 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.896 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.896 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.896 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.897 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.898 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.898 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.898 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] host                           = compute-1.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.899 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.899 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.899 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.900 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.900 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.900 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.900 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.900 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.901 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.901 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.901 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.901 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.901 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.901 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.902 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.902 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.902 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.902 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.902 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.902 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.903 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.903 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.903 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.903 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.903 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.903 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.903 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.904 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.904 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.904 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.904 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.904 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.904 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.905 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.905 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.905 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.905 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.905 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.905 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.906 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.906 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.906 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] my_block_storage_ip            = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.906 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] my_ip                          = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.906 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.906 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.906 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.907 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.907 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.907 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.907 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.907 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.907 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.908 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.908 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.908 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.908 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.908 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.908 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.908 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.909 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.909 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.909 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.909 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.909 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.909 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.909 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.910 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.910 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.910 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.910 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.910 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.910 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.910 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.911 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.911 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.911 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.911 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.911 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.911 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.911 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.911 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.912 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.912 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.912 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.912 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.912 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.912 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.912 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.913 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.913 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.913 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.913 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.913 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.913 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.913 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.914 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.914 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.914 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.914 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.914 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.914 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.914 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.915 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.915 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.915 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.915 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.915 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.915 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.915 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.916 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.916 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.916 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.916 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.916 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.916 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.917 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.917 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.917 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.917 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.917 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.917 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.918 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.918 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.918 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.918 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.918 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.918 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.918 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.919 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.919 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.919 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.919 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.919 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.919 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.919 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.920 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.920 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.920 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.920 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.920 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.920 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.921 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.921 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.921 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.921 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.921 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.921 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.921 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.922 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.922 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.922 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.922 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.922 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.922 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.922 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.923 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.923 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.923 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.923 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.923 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.923 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.924 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.924 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.924 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.924 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.924 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.924 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.924 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.925 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.925 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.925 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.925 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.925 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.925 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.925 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.926 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.926 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.926 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.926 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.926 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.926 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.926 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.927 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.927 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.927 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.927 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.927 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.927 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.928 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.928 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.928 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.928 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.928 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.928 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.928 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.929 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.929 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.929 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.929 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.929 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.929 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.930 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.930 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.930 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.930 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.930 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.930 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.930 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.931 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.931 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.931 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.931 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.931 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.931 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.931 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.932 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.932 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.932 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.932 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.932 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.932 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.932 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.933 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.933 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.933 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.933 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.933 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.933 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.934 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.934 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.934 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.934 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.934 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.934 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.934 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.935 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.935 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.935 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.935 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.935 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.935 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.935 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.936 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.936 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.936 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.936 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.936 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.937 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.937 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.937 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.937 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:02:12 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9610004440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.937 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.938 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.938 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.938 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.938 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.938 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.938 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.938 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.939 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.939 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.939 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.939 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.939 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.939 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.939 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.940 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.940 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.940 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.940 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.940 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.940 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.941 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.941 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.941 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.941 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.941 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.941 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.941 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.942 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.942 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.942 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.942 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.942 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.942 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.942 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.943 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.943 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.943 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.943 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.943 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.943 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.944 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.944 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.944 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.944 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.944 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.944 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.944 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.945 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.945 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.945 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.945 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.945 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.945 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.946 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.946 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.946 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.946 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.946 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.946 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.946 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.947 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.947 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.947 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.948 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.948 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.948 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.948 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.949 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.949 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.949 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.949 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.949 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.949 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.950 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.950 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.950 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.950 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.950 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.950 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.950 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.951 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.951 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.951 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.951 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.951 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.951 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.951 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.952 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.952 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.952 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.952 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.952 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.952 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.952 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.953 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.953 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.953 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.953 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.953 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.953 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.954 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.954 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.954 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.954 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.954 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.954 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.954 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.955 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.955 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.955 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.955 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.955 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.955 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.955 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.956 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.956 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.956 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.956 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.956 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.956 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.957 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.957 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.957 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.957 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.957 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.957 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.957 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.958 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.958 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.958 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.958 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.958 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.958 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.958 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.959 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.959 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.959 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.959 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.959 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.959 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.959 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.960 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.960 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.960 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.960 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.960 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.960 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.960 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.961 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.961 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.961 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.961 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.961 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.961 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.961 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.962 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.962 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.962 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.962 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.962 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.962 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.962 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.963 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.963 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.963 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.963 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.963 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.963 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.963 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.964 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.964 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.964 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.964 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.964 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.964 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.965 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.965 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.965 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.965 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.965 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.965 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.965 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.966 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.966 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.966 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.966 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.966 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.966 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.966 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.967 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.967 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.967 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.967 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.967 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.967 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.967 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.968 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.968 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.968 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.968 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.968 230492 WARNING oslo_config.cfg [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec 07 10:02:12 compute-1 nova_compute[230488]: live_migration_uri is deprecated for removal in favor of two other options that
Dec 07 10:02:12 compute-1 nova_compute[230488]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec 07 10:02:12 compute-1 nova_compute[230488]: and ``live_migration_inbound_addr`` respectively.
Dec 07 10:02:12 compute-1 nova_compute[230488]: ).  Its value may be silently ignored in the future.
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.969 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.969 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.969 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.969 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.970 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.970 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.970 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.970 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.970 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.970 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.971 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.971 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.971 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.971 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.971 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.971 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.972 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.972 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.972 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.rbd_secret_uuid        = 75f4c9fd-539a-5e17-b55a-0a12a4e2736c log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.972 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.972 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.972 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.973 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.973 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.973 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.973 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.973 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.973 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.973 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.974 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.974 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.974 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.974 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.974 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.974 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.975 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.975 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.975 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.975 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.975 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.976 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.976 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.976 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.976 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.976 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.976 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.976 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.977 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.977 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.977 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.977 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.977 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.977 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.978 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.978 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.978 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.978 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.978 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.978 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.978 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.979 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.979 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.979 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.979 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.979 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.979 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.980 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.980 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.980 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.980 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.980 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.980 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.980 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.981 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.981 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.981 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.981 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.981 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.981 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.981 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.982 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.982 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.982 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.982 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.982 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.983 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.983 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.983 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.983 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.983 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.983 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.983 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.984 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.984 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.984 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.984 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.984 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.984 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.984 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.985 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.985 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.985 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.985 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.985 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.985 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.985 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.986 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.986 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.986 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.986 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.986 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.986 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.987 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.987 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.987 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.987 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.987 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.987 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.988 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.988 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.988 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.988 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.988 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.988 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.989 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.989 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.989 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.989 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.989 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.989 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.989 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.990 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.990 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.990 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.990 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.990 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.991 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.991 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.991 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.991 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.991 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.992 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.992 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.992 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.992 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.992 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.992 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.992 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.993 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.993 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.993 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.993 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.993 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.993 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.994 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.994 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.994 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.994 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.994 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.995 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.995 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.995 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.995 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.995 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.995 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.995 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.996 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.996 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.996 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.996 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.996 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.996 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.996 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.997 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.997 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.997 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.997 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.997 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.997 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.998 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.998 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.998 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.998 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.998 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.998 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.999 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.999 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.999 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:12 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.999 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:12.999 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.000 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.000 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.000 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.000 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.000 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.000 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.001 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.001 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.001 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.001 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.001 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.002 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.002 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.002 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.002 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.002 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.002 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.003 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.003 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.003 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.003 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.003 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.003 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.004 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.004 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.004 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.004 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.004 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.004 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.004 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.005 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.005 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.005 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.005 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.005 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.006 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.006 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.006 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.006 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.006 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.006 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.006 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.007 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.007 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.007 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.007 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.007 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.008 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.008 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.008 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.008 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.008 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.008 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.008 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.009 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.009 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.009 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.009 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.009 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.010 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.010 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.010 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.010 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vnc.server_proxyclient_address = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.010 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.011 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.011 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
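The vnc.* values above describe how this compute node exposes instance consoles: the per-instance VNC server listens on all addresses (server_listen = ::0), the proxy reaches it via 192.168.122.101, and browser sessions go through the noVNC proxy URL. A hedged nova.conf excerpt that would yield these effective values is sketched below; the dump only shows effective values, so some of them may be defaults rather than explicit settings:

    [vnc]
    enabled = True
    novncproxy_base_url = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html
    server_listen = ::0
    server_proxyclient_address = 192.168.122.101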
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.011 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.011 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.011 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.011 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.012 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.012 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.012 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.012 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.012 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.012 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.013 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.013 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.013 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.013 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.013 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.014 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.014 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.014 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.014 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.014 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.014 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.014 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.015 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.015 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.015 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.015 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.015 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.016 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.016 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.016 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.016 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.016 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.016 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.017 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.017 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.017 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.017 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.017 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.018 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.018 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.018 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.018 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.018 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.018 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.019 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.019 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
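The oslo_policy.* lines show this deployment running with the secure-RBAC settings enabled (enforce_new_defaults and enforce_scope both True) and reading overrides from policy.yaml plus any files under policy.d. A hedged nova.conf equivalent, assuming these were set explicitly rather than inherited as defaults:

    [oslo_policy]
    enforce_new_defaults = True
    enforce_scope = True
    policy_file = policy.yaml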
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.019 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.019 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.019 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.019 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.020 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.020 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.020 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.020 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.020 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.020 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.020 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.021 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.021 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.021 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.021 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.021 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.021 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.022 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.022 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.022 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.022 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.022 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.022 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.022 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.023 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.023 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.023 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.023 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.023 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.023 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.024 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.024 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.024 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.024 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.024 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.024 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.024 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.025 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.025 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
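The oslo_messaging_rabbit.* and oslo_messaging_notifications.* values indicate durable quorum queues on RabbitMQ (amqp_durable_queues = True, rabbit_quorum_queue = True) and a no-op notification driver, so notifications are discarded rather than published. As a sketch, the corresponding nova.conf sections might look like this (transport_url is masked in the log and therefore omitted here):

    [oslo_messaging_rabbit]
    amqp_durable_queues = True
    rabbit_quorum_queue = True
    heartbeat_timeout_threshold = 60

    [oslo_messaging_notifications]
    driver = noop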
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.025 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.025 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.025 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.025 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.025 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.026 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.026 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.026 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.026 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.026 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.026 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.027 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.027 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.027 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.027 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.027 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.027 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.028 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.028 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.028 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.028 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.028 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.028 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.028 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.029 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.029 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.029 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.029 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.029 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.029 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.030 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.030 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.030 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.030 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.030 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.030 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.030 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.031 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
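The oslo_limit.* block carries the Keystone credentials used for unified limits: password auth against https://keystone-internal.openstack.svc:5000 as user nova with system scope "all"; endpoint_id is unset in this dump. A hedged nova.conf sketch of the values that are visible (the password itself is masked in the log):

    [oslo_limit]
    auth_type = password
    auth_url = https://keystone-internal.openstack.svc:5000
    username = nova
    user_domain_name = Default
    system_scope = all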
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.031 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.031 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.031 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.031 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.031 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.032 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.032 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.032 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.032 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.032 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.032 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.032 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.033 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.033 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.033 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.033 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.033 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.033 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.034 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.034 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.034 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.034 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.034 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.034 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.034 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.035 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.035 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.035 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.035 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
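The os_vif_ovs.* values show os-vif talking to the local Open vSwitch database over TCP with the native OVSDB client. A hedged nova.conf equivalent of the connection-related pieces:

    [os_vif_ovs]
    ovsdb_connection = tcp:127.0.0.1:6640
    ovsdb_interface = native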
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.035 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.035 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.036 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.036 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.036 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.036 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.036 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.036 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.036 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.037 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.037 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.037 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.037 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.037 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.037 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
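The capability lists in the privsep contexts above are plain Linux capability numbers: vif_plug_linux_bridge_privileged gets [12], vif_plug_ovs_privileged [12, 1], privsep_osbrick [21], and nova_sys_admin [0, 1, 2, 3, 12, 21]. A small Python reference table, assuming the standard linux/capability.h numbering:

    # Map the capability numbers logged above to their conventional names.
    CAP_NAMES = {
        0: "CAP_CHOWN",
        1: "CAP_DAC_OVERRIDE",
        2: "CAP_DAC_READ_SEARCH",
        3: "CAP_FOWNER",
        12: "CAP_NET_ADMIN",
        21: "CAP_SYS_ADMIN",
    }

    for ctx, caps in {
        "vif_plug_linux_bridge_privileged": [12],
        "vif_plug_ovs_privileged": [12, 1],
        "privsep_osbrick": [21],
        "nova_sys_admin": [0, 1, 2, 3, 12, 21],
    }.items():
        print(ctx, [CAP_NAMES[c] for c in caps])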
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.037 230492 DEBUG oslo_service.service [None req-d492e186-107b-4531-b967-4bba7e2525ce - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.039 230492 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.058 230492 INFO nova.virt.node [None req-46f2470e-57a4-43e0-ac4d-9a5c6971ef1a - - - - - -] Determined node identity 58b51610-0751-43d9-94a3-66540bffec81 from /var/lib/nova/compute_id
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.059 230492 DEBUG nova.virt.libvirt.host [None req-46f2470e-57a4-43e0-ac4d-9a5c6971ef1a - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.059 230492 DEBUG nova.virt.libvirt.host [None req-46f2470e-57a4-43e0-ac4d-9a5c6971ef1a - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.060 230492 DEBUG nova.virt.libvirt.host [None req-46f2470e-57a4-43e0-ac4d-9a5c6971ef1a - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.060 230492 DEBUG nova.virt.libvirt.host [None req-46f2470e-57a4-43e0-ac4d-9a5c6971ef1a - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.078 230492 DEBUG nova.virt.libvirt.host [None req-46f2470e-57a4-43e0-ac4d-9a5c6971ef1a - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f0ff29ba520> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.081 230492 DEBUG nova.virt.libvirt.host [None req-46f2470e-57a4-43e0-ac4d-9a5c6971ef1a - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f0ff29ba520> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.082 230492 INFO nova.virt.libvirt.driver [None req-46f2470e-57a4-43e0-ac4d-9a5c6971ef1a - - - - - -] Connection event '1' reason 'None'
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.088 230492 INFO nova.virt.libvirt.host [None req-46f2470e-57a4-43e0-ac4d-9a5c6971ef1a - - - - - -] Libvirt host capabilities <capabilities>
Dec 07 10:02:13 compute-1 nova_compute[230488]: 
Dec 07 10:02:13 compute-1 nova_compute[230488]:   <host>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <uuid>33deaee1-72d1-48f6-b57d-96104f1f436a</uuid>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <cpu>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <arch>x86_64</arch>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model>EPYC-Rome-v4</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <vendor>AMD</vendor>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <microcode version='16777317'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <signature family='23' model='49' stepping='0'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <maxphysaddr mode='emulate' bits='40'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature name='x2apic'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature name='tsc-deadline'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature name='osxsave'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature name='hypervisor'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature name='tsc_adjust'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature name='spec-ctrl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature name='stibp'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature name='arch-capabilities'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature name='ssbd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature name='cmp_legacy'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature name='topoext'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature name='virt-ssbd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature name='lbrv'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature name='tsc-scale'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature name='vmcb-clean'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature name='pause-filter'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature name='pfthreshold'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature name='svme-addr-chk'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature name='rdctl-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature name='skip-l1dfl-vmentry'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature name='mds-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature name='pschange-mc-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <pages unit='KiB' size='4'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <pages unit='KiB' size='2048'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <pages unit='KiB' size='1048576'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </cpu>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <power_management>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <suspend_mem/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </power_management>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <iommu support='no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <migration_features>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <live/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <uri_transports>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <uri_transport>tcp</uri_transport>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <uri_transport>rdma</uri_transport>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </uri_transports>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </migration_features>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <topology>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <cells num='1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <cell id='0'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:           <memory unit='KiB'>7864316</memory>
Dec 07 10:02:13 compute-1 nova_compute[230488]:           <pages unit='KiB' size='4'>1966079</pages>
Dec 07 10:02:13 compute-1 nova_compute[230488]:           <pages unit='KiB' size='2048'>0</pages>
Dec 07 10:02:13 compute-1 nova_compute[230488]:           <pages unit='KiB' size='1048576'>0</pages>
Dec 07 10:02:13 compute-1 nova_compute[230488]:           <distances>
Dec 07 10:02:13 compute-1 nova_compute[230488]:             <sibling id='0' value='10'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:           </distances>
Dec 07 10:02:13 compute-1 nova_compute[230488]:           <cpus num='8'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:           </cpus>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         </cell>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </cells>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </topology>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <cache>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </cache>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <secmodel>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model>selinux</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <doi>0</doi>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </secmodel>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <secmodel>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model>dac</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <doi>0</doi>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <baselabel type='kvm'>+107:+107</baselabel>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <baselabel type='qemu'>+107:+107</baselabel>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </secmodel>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   </host>
Dec 07 10:02:13 compute-1 nova_compute[230488]: 
Dec 07 10:02:13 compute-1 nova_compute[230488]:   <guest>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <os_type>hvm</os_type>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <arch name='i686'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <wordsize>32</wordsize>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <domain type='qemu'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <domain type='kvm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </arch>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <features>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <pae/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <nonpae/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <acpi default='on' toggle='yes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <apic default='on' toggle='no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <cpuselection/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <deviceboot/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <disksnapshot default='on' toggle='no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <externalSnapshot/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </features>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   </guest>
Dec 07 10:02:13 compute-1 nova_compute[230488]: 
Dec 07 10:02:13 compute-1 nova_compute[230488]:   <guest>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <os_type>hvm</os_type>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <arch name='x86_64'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <wordsize>64</wordsize>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <domain type='qemu'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <domain type='kvm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </arch>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <features>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <acpi default='on' toggle='yes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <apic default='on' toggle='no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <cpuselection/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <deviceboot/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <disksnapshot default='on' toggle='no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <externalSnapshot/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </features>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   </guest>
Dec 07 10:02:13 compute-1 nova_compute[230488]: 
Dec 07 10:02:13 compute-1 nova_compute[230488]: </capabilities>
Dec 07 10:02:13 compute-1 nova_compute[230488]: 
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.094 230492 DEBUG nova.virt.libvirt.host [None req-46f2470e-57a4-43e0-ac4d-9a5c6971ef1a - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.096 230492 DEBUG nova.virt.libvirt.volume.mount [None req-46f2470e-57a4-43e0-ac4d-9a5c6971ef1a - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.099 230492 DEBUG nova.virt.libvirt.host [None req-46f2470e-57a4-43e0-ac4d-9a5c6971ef1a - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec 07 10:02:13 compute-1 nova_compute[230488]: <domainCapabilities>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   <path>/usr/libexec/qemu-kvm</path>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   <domain>kvm</domain>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   <arch>i686</arch>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   <vcpu max='240'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   <iothreads supported='yes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   <os supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <enum name='firmware'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <loader supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='type'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>rom</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>pflash</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='readonly'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>yes</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>no</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='secure'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>no</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </loader>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   </os>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   <cpu>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <mode name='host-passthrough' supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='hostPassthroughMigratable'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>on</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>off</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </mode>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <mode name='maximum' supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='maximumMigratable'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>on</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>off</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </mode>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <mode name='host-model' supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <vendor>AMD</vendor>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='x2apic'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='tsc-deadline'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='hypervisor'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='tsc_adjust'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='spec-ctrl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='stibp'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='ssbd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='cmp_legacy'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='overflow-recov'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='succor'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='ibrs'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='amd-ssbd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='virt-ssbd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='lbrv'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='tsc-scale'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='vmcb-clean'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='flushbyasid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='pause-filter'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='pfthreshold'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='svme-addr-chk'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='disable' name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </mode>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <mode name='custom' supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Broadwell'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Broadwell-IBRS'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Broadwell-noTSX'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Broadwell-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Broadwell-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Broadwell-v3'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Broadwell-v4'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Cascadelake-Server'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Cascadelake-Server-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Cascadelake-Server-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Cascadelake-Server-v3'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Cascadelake-Server-v4'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Cascadelake-Server-v5'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Cooperlake'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Cooperlake-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Cooperlake-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Denverton'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='mpx'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Denverton-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='mpx'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Denverton-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Denverton-v3'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Dhyana-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='EPYC-Genoa'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amd-psfd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='auto-ibrs'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='no-nested-data-bp'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='null-sel-clr-base'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='stibp-always-on'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='EPYC-Genoa-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amd-psfd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='auto-ibrs'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='no-nested-data-bp'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='null-sel-clr-base'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='stibp-always-on'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='EPYC-Milan'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='EPYC-Milan-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='EPYC-Milan-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amd-psfd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='no-nested-data-bp'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='null-sel-clr-base'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='stibp-always-on'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='EPYC-Rome'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='EPYC-Rome-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='EPYC-Rome-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='EPYC-Rome-v3'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='EPYC-v3'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='EPYC-v4'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='GraniteRapids'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-fp16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-int8'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-tile'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-fp16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fbsdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrc'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrs'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fzrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='mcdt-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pbrsb-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='prefetchiti'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='psdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='sbdr-ssdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='serialize'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='tsx-ldtrk'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xfd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='GraniteRapids-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-fp16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-int8'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-tile'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-fp16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fbsdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrc'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrs'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fzrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='mcdt-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pbrsb-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='prefetchiti'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='psdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='sbdr-ssdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='serialize'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='tsx-ldtrk'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xfd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='GraniteRapids-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-fp16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-int8'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-tile'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx10'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx10-128'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx10-256'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx10-512'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-fp16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='cldemote'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fbsdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrc'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrs'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fzrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='mcdt-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='movdir64b'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='movdiri'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pbrsb-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='prefetchiti'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='psdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='sbdr-ssdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='serialize'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ss'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='tsx-ldtrk'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xfd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Haswell'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Haswell-IBRS'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Haswell-noTSX'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Haswell-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Haswell-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Haswell-v3'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Haswell-v4'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Icelake-Server'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Icelake-Server-noTSX'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Icelake-Server-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Icelake-Server-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Icelake-Server-v3'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Icelake-Server-v4'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Icelake-Server-v5'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Icelake-Server-v6'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Icelake-Server-v7'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='IvyBridge'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='IvyBridge-IBRS'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='IvyBridge-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='IvyBridge-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='KnightsMill'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-4fmaps'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-4vnniw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512er'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512pf'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ss'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='KnightsMill-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-4fmaps'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-4vnniw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512er'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512pf'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ss'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Opteron_G4'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fma4'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xop'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Opteron_G4-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fma4'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xop'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Opteron_G5'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fma4'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='tbm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xop'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Opteron_G5-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fma4'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='tbm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xop'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='SapphireRapids'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-int8'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-tile'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-fp16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrc'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrs'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fzrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='serialize'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='tsx-ldtrk'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xfd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='SapphireRapids-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-int8'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-tile'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-fp16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrc'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrs'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fzrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='serialize'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='tsx-ldtrk'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xfd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='SapphireRapids-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-int8'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-tile'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-fp16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fbsdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrc'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrs'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fzrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='psdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='sbdr-ssdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='serialize'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='tsx-ldtrk'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xfd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='SapphireRapids-v3'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-int8'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-tile'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-fp16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='cldemote'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fbsdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrc'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrs'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fzrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='movdir64b'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='movdiri'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='psdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='sbdr-ssdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='serialize'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ss'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='tsx-ldtrk'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xfd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='SierraForest'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-ne-convert'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-vnni-int8'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='cmpccxadd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fbsdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrs'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='mcdt-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pbrsb-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='psdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='sbdr-ssdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='serialize'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='SierraForest-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-ne-convert'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-vnni-int8'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='cmpccxadd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fbsdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrs'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='mcdt-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pbrsb-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='psdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='sbdr-ssdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='serialize'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Client'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Client-IBRS'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Client-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Client-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Client-v3'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Client-v4'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Server'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Server-IBRS'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Server-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Server-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Server-v3'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Server-v4'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Server-v5'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Snowridge'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='cldemote'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='core-capability'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='movdir64b'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='movdiri'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='mpx'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='split-lock-detect'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Snowridge-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='cldemote'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='core-capability'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='movdir64b'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='movdiri'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='mpx'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='split-lock-detect'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Snowridge-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='cldemote'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='core-capability'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='movdir64b'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='movdiri'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='split-lock-detect'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Snowridge-v3'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='cldemote'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='core-capability'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='movdir64b'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='movdiri'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='split-lock-detect'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Snowridge-v4'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='cldemote'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='movdir64b'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='movdiri'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='athlon'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='3dnow'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='3dnowext'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='athlon-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='3dnow'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='3dnowext'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='core2duo'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ss'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='core2duo-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ss'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='coreduo'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ss'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='coreduo-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ss'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='n270'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ss'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='n270-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ss'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='phenom'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='3dnow'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='3dnowext'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='phenom-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='3dnow'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='3dnowext'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </mode>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   </cpu>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   <memoryBacking supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <enum name='sourceType'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <value>file</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <value>anonymous</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <value>memfd</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   </memoryBacking>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   <devices>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <disk supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='diskDevice'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>disk</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>cdrom</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>floppy</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>lun</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='bus'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>ide</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>fdc</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>scsi</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>virtio</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>usb</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>sata</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='model'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>virtio</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>virtio-transitional</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>virtio-non-transitional</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </disk>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <graphics supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='type'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>vnc</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>egl-headless</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>dbus</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </graphics>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <video supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='modelType'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>vga</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>cirrus</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>virtio</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>none</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>bochs</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>ramfb</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </video>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <hostdev supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='mode'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>subsystem</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='startupPolicy'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>default</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>mandatory</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>requisite</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>optional</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='subsysType'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>usb</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>pci</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>scsi</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='capsType'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='pciBackend'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </hostdev>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <rng supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='model'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>virtio</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>virtio-transitional</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>virtio-non-transitional</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='backendModel'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>random</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>egd</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>builtin</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </rng>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <filesystem supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='driverType'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>path</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>handle</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>virtiofs</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </filesystem>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <tpm supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='model'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>tpm-tis</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>tpm-crb</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='backendModel'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>emulator</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>external</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='backendVersion'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>2.0</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </tpm>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <redirdev supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='bus'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>usb</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </redirdev>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <channel supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='type'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>pty</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>unix</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </channel>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <crypto supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='model'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='type'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>qemu</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='backendModel'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>builtin</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </crypto>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <interface supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='backendType'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>default</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>passt</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </interface>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <panic supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='model'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>isa</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>hyperv</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </panic>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <console supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='type'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>null</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>vc</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>pty</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>dev</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>file</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>pipe</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>stdio</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>udp</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>tcp</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>unix</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>qemu-vdagent</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>dbus</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </console>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   </devices>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   <features>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <gic supported='no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <vmcoreinfo supported='yes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <genid supported='yes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <backingStoreInput supported='yes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <backup supported='yes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <async-teardown supported='yes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <ps2 supported='yes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <sev supported='no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <sgx supported='no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <hyperv supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='features'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>relaxed</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>vapic</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>spinlocks</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>vpindex</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>runtime</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>synic</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>stimer</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>reset</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>vendor_id</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>frequencies</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>reenlightenment</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>tlbflush</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>ipi</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>avic</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>emsr_bitmap</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>xmm_input</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <defaults>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <spinlocks>4095</spinlocks>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <stimer_direct>on</stimer_direct>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <tlbflush_direct>on</tlbflush_direct>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <tlbflush_extended>on</tlbflush_extended>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </defaults>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </hyperv>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <launchSecurity supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='sectype'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>tdx</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </launchSecurity>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   </features>
Dec 07 10:02:13 compute-1 nova_compute[230488]: </domainCapabilities>
Dec 07 10:02:13 compute-1 nova_compute[230488]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.104 230492 DEBUG nova.virt.libvirt.host [None req-46f2470e-57a4-43e0-ac4d-9a5c6971ef1a - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec 07 10:02:13 compute-1 nova_compute[230488]: <domainCapabilities>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   <path>/usr/libexec/qemu-kvm</path>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   <domain>kvm</domain>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   <arch>i686</arch>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   <vcpu max='4096'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   <iothreads supported='yes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   <os supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <enum name='firmware'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <loader supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='type'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>rom</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>pflash</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='readonly'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>yes</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>no</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='secure'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>no</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </loader>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   </os>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   <cpu>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <mode name='host-passthrough' supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='hostPassthroughMigratable'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>on</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>off</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </mode>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <mode name='maximum' supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='maximumMigratable'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>on</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>off</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </mode>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <mode name='host-model' supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <vendor>AMD</vendor>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='x2apic'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='tsc-deadline'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='hypervisor'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='tsc_adjust'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='spec-ctrl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='stibp'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='ssbd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='cmp_legacy'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='overflow-recov'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='succor'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='ibrs'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='amd-ssbd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='virt-ssbd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='lbrv'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='tsc-scale'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='vmcb-clean'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='flushbyasid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='pause-filter'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='pfthreshold'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='svme-addr-chk'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='disable' name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </mode>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <mode name='custom' supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Broadwell'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Broadwell-IBRS'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Broadwell-noTSX'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Broadwell-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Broadwell-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Broadwell-v3'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Broadwell-v4'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Cascadelake-Server'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Cascadelake-Server-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Cascadelake-Server-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Cascadelake-Server-v3'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Cascadelake-Server-v4'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Cascadelake-Server-v5'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Cooperlake'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Cooperlake-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Cooperlake-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Denverton'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='mpx'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Denverton-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='mpx'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Denverton-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Denverton-v3'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Dhyana-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='EPYC-Genoa'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amd-psfd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='auto-ibrs'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='no-nested-data-bp'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='null-sel-clr-base'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='stibp-always-on'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='EPYC-Genoa-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amd-psfd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='auto-ibrs'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='no-nested-data-bp'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='null-sel-clr-base'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='stibp-always-on'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='EPYC-Milan'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='EPYC-Milan-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='EPYC-Milan-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amd-psfd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='no-nested-data-bp'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='null-sel-clr-base'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='stibp-always-on'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='EPYC-Rome'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='EPYC-Rome-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='EPYC-Rome-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='EPYC-Rome-v3'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='EPYC-v3'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='EPYC-v4'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='GraniteRapids'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-fp16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-int8'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-tile'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-fp16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fbsdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrc'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrs'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fzrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='mcdt-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pbrsb-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='prefetchiti'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='psdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='sbdr-ssdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='serialize'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='tsx-ldtrk'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xfd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='GraniteRapids-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-fp16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-int8'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-tile'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-fp16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fbsdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrc'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrs'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fzrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='mcdt-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pbrsb-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='prefetchiti'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='psdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='sbdr-ssdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='serialize'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='tsx-ldtrk'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xfd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='GraniteRapids-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-fp16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-int8'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-tile'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx10'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx10-128'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx10-256'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx10-512'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-fp16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='cldemote'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fbsdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrc'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrs'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fzrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='mcdt-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='movdir64b'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='movdiri'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pbrsb-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='prefetchiti'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='psdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='sbdr-ssdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='serialize'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ss'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='tsx-ldtrk'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xfd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Haswell'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Haswell-IBRS'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Haswell-noTSX'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Haswell-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Haswell-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Haswell-v3'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Haswell-v4'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Icelake-Server'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Icelake-Server-noTSX'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Icelake-Server-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Icelake-Server-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Icelake-Server-v3'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Icelake-Server-v4'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Icelake-Server-v5'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Icelake-Server-v6'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Icelake-Server-v7'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='IvyBridge'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='IvyBridge-IBRS'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='IvyBridge-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='IvyBridge-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='KnightsMill'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-4fmaps'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-4vnniw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512er'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512pf'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ss'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='KnightsMill-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-4fmaps'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-4vnniw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512er'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512pf'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ss'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Opteron_G4'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fma4'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xop'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Opteron_G4-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fma4'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xop'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Opteron_G5'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fma4'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='tbm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xop'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Opteron_G5-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fma4'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='tbm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xop'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='SapphireRapids'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-int8'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-tile'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-fp16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrc'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrs'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fzrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='serialize'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='tsx-ldtrk'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xfd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='SapphireRapids-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-int8'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-tile'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-fp16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrc'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrs'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fzrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='serialize'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='tsx-ldtrk'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xfd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='SapphireRapids-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-int8'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-tile'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-fp16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fbsdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrc'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrs'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fzrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='psdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='sbdr-ssdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='serialize'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='tsx-ldtrk'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xfd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='SapphireRapids-v3'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-int8'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-tile'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-fp16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='cldemote'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fbsdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrc'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrs'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fzrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='movdir64b'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='movdiri'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='psdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='sbdr-ssdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='serialize'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ss'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='tsx-ldtrk'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xfd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='SierraForest'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-ne-convert'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-vnni-int8'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='cmpccxadd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fbsdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrs'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='mcdt-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pbrsb-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='psdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='sbdr-ssdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='serialize'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='SierraForest-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-ne-convert'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-vnni-int8'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='cmpccxadd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fbsdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrs'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='mcdt-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pbrsb-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='psdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='sbdr-ssdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='serialize'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Client'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Client-IBRS'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Client-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Client-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Client-v3'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Client-v4'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Server'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Server-IBRS'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Server-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Server-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Server-v3'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Server-v4'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Server-v5'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Snowridge'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='cldemote'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='core-capability'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='movdir64b'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='movdiri'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='mpx'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='split-lock-detect'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Snowridge-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='cldemote'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='core-capability'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='movdir64b'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='movdiri'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='mpx'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='split-lock-detect'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Snowridge-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='cldemote'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='core-capability'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='movdir64b'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='movdiri'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='split-lock-detect'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Snowridge-v3'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='cldemote'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='core-capability'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='movdir64b'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='movdiri'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='split-lock-detect'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Snowridge-v4'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='cldemote'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='movdir64b'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='movdiri'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='athlon'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='3dnow'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='3dnowext'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='athlon-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='3dnow'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='3dnowext'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='core2duo'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ss'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='core2duo-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ss'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='coreduo'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ss'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='coreduo-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ss'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='n270'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ss'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='n270-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ss'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='phenom'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='3dnow'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='3dnowext'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='phenom-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='3dnow'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='3dnowext'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </mode>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   </cpu>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   <memoryBacking supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <enum name='sourceType'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <value>file</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <value>anonymous</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <value>memfd</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   </memoryBacking>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   <devices>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <disk supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='diskDevice'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>disk</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>cdrom</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>floppy</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>lun</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='bus'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>fdc</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>scsi</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>virtio</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>usb</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>sata</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='model'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>virtio</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>virtio-transitional</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>virtio-non-transitional</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </disk>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <graphics supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='type'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>vnc</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>egl-headless</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>dbus</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </graphics>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <video supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='modelType'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>vga</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>cirrus</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>virtio</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>none</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>bochs</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>ramfb</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </video>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <hostdev supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='mode'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>subsystem</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='startupPolicy'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>default</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>mandatory</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>requisite</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>optional</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='subsysType'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>usb</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>pci</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>scsi</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='capsType'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='pciBackend'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </hostdev>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <rng supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='model'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>virtio</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>virtio-transitional</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>virtio-non-transitional</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='backendModel'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>random</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>egd</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>builtin</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </rng>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <filesystem supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='driverType'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>path</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>handle</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>virtiofs</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </filesystem>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <tpm supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='model'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>tpm-tis</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>tpm-crb</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='backendModel'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>emulator</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>external</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='backendVersion'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>2.0</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </tpm>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <redirdev supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='bus'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>usb</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </redirdev>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <channel supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='type'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>pty</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>unix</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </channel>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <crypto supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='model'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='type'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>qemu</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='backendModel'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>builtin</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </crypto>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <interface supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='backendType'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>default</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>passt</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </interface>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <panic supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='model'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>isa</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>hyperv</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </panic>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <console supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='type'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>null</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>vc</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>pty</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>dev</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>file</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>pipe</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>stdio</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>udp</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>tcp</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>unix</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>qemu-vdagent</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>dbus</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </console>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   </devices>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   <features>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <gic supported='no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <vmcoreinfo supported='yes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <genid supported='yes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <backingStoreInput supported='yes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <backup supported='yes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <async-teardown supported='yes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <ps2 supported='yes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <sev supported='no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <sgx supported='no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <hyperv supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='features'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>relaxed</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>vapic</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>spinlocks</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>vpindex</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>runtime</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>synic</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>stimer</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>reset</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>vendor_id</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>frequencies</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>reenlightenment</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>tlbflush</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>ipi</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>avic</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>emsr_bitmap</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>xmm_input</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <defaults>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <spinlocks>4095</spinlocks>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <stimer_direct>on</stimer_direct>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <tlbflush_direct>on</tlbflush_direct>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <tlbflush_extended>on</tlbflush_extended>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </defaults>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </hyperv>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <launchSecurity supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='sectype'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>tdx</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </launchSecurity>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   </features>
Dec 07 10:02:13 compute-1 nova_compute[230488]: </domainCapabilities>
Dec 07 10:02:13 compute-1 nova_compute[230488]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.136 230492 DEBUG nova.virt.libvirt.host [None req-46f2470e-57a4-43e0-ac4d-9a5c6971ef1a - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.140 230492 DEBUG nova.virt.libvirt.host [None req-46f2470e-57a4-43e0-ac4d-9a5c6971ef1a - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec 07 10:02:13 compute-1 nova_compute[230488]: <domainCapabilities>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   <path>/usr/libexec/qemu-kvm</path>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   <domain>kvm</domain>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   <arch>x86_64</arch>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   <vcpu max='4096'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   <iothreads supported='yes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   <os supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <enum name='firmware'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <value>efi</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <loader supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='type'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>rom</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>pflash</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='readonly'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>yes</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>no</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='secure'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>yes</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>no</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </loader>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   </os>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   <cpu>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <mode name='host-passthrough' supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='hostPassthroughMigratable'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>on</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>off</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </mode>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <mode name='maximum' supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='maximumMigratable'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>on</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>off</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </mode>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <mode name='host-model' supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <vendor>AMD</vendor>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='x2apic'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='tsc-deadline'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='hypervisor'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='tsc_adjust'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='spec-ctrl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='stibp'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='ssbd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='cmp_legacy'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='overflow-recov'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='succor'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='ibrs'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='amd-ssbd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='virt-ssbd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='lbrv'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='tsc-scale'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='vmcb-clean'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='flushbyasid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='pause-filter'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='pfthreshold'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='svme-addr-chk'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='disable' name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </mode>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <mode name='custom' supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Broadwell'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Broadwell-IBRS'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Broadwell-noTSX'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Broadwell-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Broadwell-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Broadwell-v3'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Broadwell-v4'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Cascadelake-Server'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Cascadelake-Server-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Cascadelake-Server-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Cascadelake-Server-v3'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Cascadelake-Server-v4'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Cascadelake-Server-v5'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Cooperlake'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Cooperlake-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Cooperlake-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Denverton'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='mpx'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Denverton-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='mpx'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Denverton-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Denverton-v3'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Dhyana-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='EPYC-Genoa'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amd-psfd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='auto-ibrs'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='no-nested-data-bp'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='null-sel-clr-base'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='stibp-always-on'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='EPYC-Genoa-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amd-psfd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='auto-ibrs'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='no-nested-data-bp'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='null-sel-clr-base'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='stibp-always-on'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='EPYC-Milan'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='EPYC-Milan-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='EPYC-Milan-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amd-psfd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='no-nested-data-bp'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='null-sel-clr-base'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='stibp-always-on'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='EPYC-Rome'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='EPYC-Rome-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='EPYC-Rome-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='EPYC-Rome-v3'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='EPYC-v3'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='EPYC-v4'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='GraniteRapids'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-fp16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-int8'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-tile'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-fp16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fbsdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrc'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrs'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fzrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='mcdt-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pbrsb-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='prefetchiti'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='psdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='sbdr-ssdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='serialize'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='tsx-ldtrk'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xfd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='GraniteRapids-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-fp16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-int8'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-tile'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-fp16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fbsdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrc'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrs'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fzrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='mcdt-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pbrsb-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='prefetchiti'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='psdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='sbdr-ssdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='serialize'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='tsx-ldtrk'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xfd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='GraniteRapids-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-fp16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-int8'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-tile'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx10'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx10-128'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx10-256'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx10-512'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-fp16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='cldemote'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fbsdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrc'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrs'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fzrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='mcdt-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='movdir64b'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='movdiri'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pbrsb-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='prefetchiti'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='psdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='sbdr-ssdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='serialize'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ss'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='tsx-ldtrk'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xfd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Haswell'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Haswell-IBRS'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Haswell-noTSX'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Haswell-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Haswell-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Haswell-v3'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Haswell-v4'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Icelake-Server'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Icelake-Server-noTSX'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Icelake-Server-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Icelake-Server-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Icelake-Server-v3'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Icelake-Server-v4'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Icelake-Server-v5'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Icelake-Server-v6'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Icelake-Server-v7'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='IvyBridge'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='IvyBridge-IBRS'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='IvyBridge-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='IvyBridge-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='KnightsMill'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-4fmaps'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-4vnniw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512er'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512pf'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ss'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='KnightsMill-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-4fmaps'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-4vnniw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512er'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512pf'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ss'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Opteron_G4'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fma4'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xop'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Opteron_G4-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fma4'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xop'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Opteron_G5'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fma4'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='tbm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xop'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Opteron_G5-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fma4'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='tbm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xop'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='SapphireRapids'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-int8'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-tile'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-fp16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrc'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrs'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fzrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='serialize'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='tsx-ldtrk'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xfd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='SapphireRapids-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-int8'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-tile'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-fp16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrc'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrs'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fzrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='serialize'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='tsx-ldtrk'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xfd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='SapphireRapids-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-int8'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-tile'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-fp16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fbsdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrc'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrs'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fzrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='psdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='sbdr-ssdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='serialize'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='tsx-ldtrk'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xfd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='SapphireRapids-v3'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-int8'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-tile'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-fp16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='cldemote'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fbsdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrc'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrs'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fzrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='movdir64b'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='movdiri'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='psdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='sbdr-ssdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='serialize'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ss'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='tsx-ldtrk'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xfd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='SierraForest'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-ne-convert'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-vnni-int8'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='cmpccxadd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fbsdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrs'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='mcdt-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pbrsb-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='psdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='sbdr-ssdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='serialize'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='SierraForest-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-ne-convert'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-vnni-int8'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='cmpccxadd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fbsdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrs'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='mcdt-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pbrsb-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='psdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='sbdr-ssdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='serialize'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Client'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Client-IBRS'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Client-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Client-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Client-v3'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Client-v4'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Server'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Server-IBRS'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Server-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Server-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Server-v3'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Server-v4'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Server-v5'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Snowridge'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='cldemote'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='core-capability'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='movdir64b'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='movdiri'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='mpx'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='split-lock-detect'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Snowridge-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='cldemote'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='core-capability'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='movdir64b'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='movdiri'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='mpx'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='split-lock-detect'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Snowridge-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='cldemote'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='core-capability'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='movdir64b'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='movdiri'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='split-lock-detect'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Snowridge-v3'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='cldemote'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='core-capability'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='movdir64b'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='movdiri'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='split-lock-detect'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Snowridge-v4'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='cldemote'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='movdir64b'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='movdiri'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='athlon'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='3dnow'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='3dnowext'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='athlon-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='3dnow'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='3dnowext'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='core2duo'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ss'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='core2duo-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ss'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='coreduo'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ss'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='coreduo-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ss'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='n270'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ss'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='n270-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ss'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='phenom'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='3dnow'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='3dnowext'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='phenom-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='3dnow'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='3dnowext'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </mode>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   </cpu>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   <memoryBacking supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <enum name='sourceType'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <value>file</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <value>anonymous</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <value>memfd</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   </memoryBacking>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   <devices>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <disk supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='diskDevice'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>disk</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>cdrom</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>floppy</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>lun</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='bus'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>fdc</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>scsi</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>virtio</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>usb</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>sata</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='model'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>virtio</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>virtio-transitional</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>virtio-non-transitional</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </disk>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <graphics supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='type'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>vnc</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>egl-headless</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>dbus</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </graphics>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <video supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='modelType'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>vga</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>cirrus</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>virtio</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>none</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>bochs</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>ramfb</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </video>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <hostdev supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='mode'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>subsystem</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='startupPolicy'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>default</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>mandatory</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>requisite</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>optional</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='subsysType'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>usb</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>pci</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>scsi</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='capsType'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='pciBackend'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </hostdev>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <rng supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='model'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>virtio</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>virtio-transitional</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>virtio-non-transitional</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='backendModel'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>random</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>egd</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>builtin</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </rng>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <filesystem supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='driverType'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>path</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>handle</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>virtiofs</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </filesystem>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <tpm supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='model'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>tpm-tis</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>tpm-crb</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='backendModel'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>emulator</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>external</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='backendVersion'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>2.0</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </tpm>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <redirdev supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='bus'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>usb</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </redirdev>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <channel supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='type'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>pty</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>unix</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </channel>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <crypto supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='model'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='type'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>qemu</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='backendModel'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>builtin</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </crypto>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <interface supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='backendType'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>default</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>passt</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </interface>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <panic supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='model'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>isa</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>hyperv</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </panic>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <console supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='type'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>null</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>vc</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>pty</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>dev</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>file</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>pipe</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>stdio</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>udp</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>tcp</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>unix</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>qemu-vdagent</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>dbus</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </console>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   </devices>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   <features>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <gic supported='no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <vmcoreinfo supported='yes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <genid supported='yes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <backingStoreInput supported='yes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <backup supported='yes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <async-teardown supported='yes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <ps2 supported='yes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <sev supported='no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <sgx supported='no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <hyperv supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='features'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>relaxed</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>vapic</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>spinlocks</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>vpindex</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>runtime</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>synic</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>stimer</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>reset</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>vendor_id</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>frequencies</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>reenlightenment</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>tlbflush</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>ipi</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>avic</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>emsr_bitmap</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>xmm_input</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <defaults>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <spinlocks>4095</spinlocks>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <stimer_direct>on</stimer_direct>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <tlbflush_direct>on</tlbflush_direct>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <tlbflush_extended>on</tlbflush_extended>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </defaults>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </hyperv>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <launchSecurity supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='sectype'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>tdx</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </launchSecurity>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   </features>
Dec 07 10:02:13 compute-1 nova_compute[230488]: </domainCapabilities>
Dec 07 10:02:13 compute-1 nova_compute[230488]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.219 230492 DEBUG nova.virt.libvirt.host [None req-46f2470e-57a4-43e0-ac4d-9a5c6971ef1a - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec 07 10:02:13 compute-1 nova_compute[230488]: <domainCapabilities>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   <path>/usr/libexec/qemu-kvm</path>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   <domain>kvm</domain>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   <arch>x86_64</arch>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   <vcpu max='240'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   <iothreads supported='yes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   <os supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <enum name='firmware'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <loader supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='type'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>rom</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>pflash</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='readonly'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>yes</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>no</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='secure'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>no</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </loader>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   </os>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   <cpu>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <mode name='host-passthrough' supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='hostPassthroughMigratable'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>on</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>off</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </mode>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <mode name='maximum' supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='maximumMigratable'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>on</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>off</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </mode>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <mode name='host-model' supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <vendor>AMD</vendor>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='x2apic'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='tsc-deadline'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='hypervisor'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='tsc_adjust'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='spec-ctrl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='stibp'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='ssbd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='cmp_legacy'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='overflow-recov'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='succor'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='ibrs'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='amd-ssbd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='virt-ssbd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='lbrv'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='tsc-scale'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='vmcb-clean'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='flushbyasid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='pause-filter'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='pfthreshold'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='svme-addr-chk'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <feature policy='disable' name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </mode>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <mode name='custom' supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Broadwell'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Broadwell-IBRS'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Broadwell-noTSX'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Broadwell-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Broadwell-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Broadwell-v3'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Broadwell-v4'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Cascadelake-Server'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Cascadelake-Server-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Cascadelake-Server-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Cascadelake-Server-v3'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Cascadelake-Server-v4'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Cascadelake-Server-v5'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Cooperlake'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Cooperlake-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Cooperlake-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Denverton'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='mpx'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Denverton-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='mpx'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Denverton-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Denverton-v3'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Dhyana-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='EPYC-Genoa'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amd-psfd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='auto-ibrs'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='no-nested-data-bp'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='null-sel-clr-base'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='stibp-always-on'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='EPYC-Genoa-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amd-psfd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='auto-ibrs'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='no-nested-data-bp'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='null-sel-clr-base'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='stibp-always-on'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='EPYC-Milan'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='EPYC-Milan-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='EPYC-Milan-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amd-psfd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='no-nested-data-bp'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='null-sel-clr-base'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='stibp-always-on'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='EPYC-Rome'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='EPYC-Rome-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='EPYC-Rome-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='EPYC-Rome-v3'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='EPYC-v3'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='EPYC-v4'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='GraniteRapids'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-fp16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-int8'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-tile'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-fp16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fbsdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrc'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrs'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fzrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='mcdt-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pbrsb-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='prefetchiti'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='psdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='sbdr-ssdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='serialize'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='tsx-ldtrk'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xfd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='GraniteRapids-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-fp16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-int8'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-tile'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-fp16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fbsdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrc'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrs'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fzrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='mcdt-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pbrsb-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='prefetchiti'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='psdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='sbdr-ssdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='serialize'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='tsx-ldtrk'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xfd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='GraniteRapids-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-fp16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-int8'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-tile'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx10'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx10-128'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx10-256'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx10-512'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-fp16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='cldemote'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fbsdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrc'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrs'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fzrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='mcdt-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='movdir64b'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='movdiri'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pbrsb-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='prefetchiti'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='psdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='sbdr-ssdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='serialize'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ss'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='tsx-ldtrk'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xfd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Haswell'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Haswell-IBRS'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Haswell-noTSX'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Haswell-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Haswell-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Haswell-v3'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Haswell-v4'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Icelake-Server'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Icelake-Server-noTSX'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Icelake-Server-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Icelake-Server-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Icelake-Server-v3'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Icelake-Server-v4'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Icelake-Server-v5'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Icelake-Server-v6'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Icelake-Server-v7'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='IvyBridge'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='IvyBridge-IBRS'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='IvyBridge-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='IvyBridge-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='KnightsMill'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-4fmaps'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-4vnniw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512er'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512pf'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ss'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='KnightsMill-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-4fmaps'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-4vnniw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512er'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512pf'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ss'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Opteron_G4'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fma4'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xop'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Opteron_G4-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fma4'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xop'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Opteron_G5'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fma4'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='tbm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xop'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Opteron_G5-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fma4'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='tbm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xop'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='SapphireRapids'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-int8'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-tile'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-fp16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrc'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrs'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fzrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='serialize'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='tsx-ldtrk'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xfd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='SapphireRapids-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-int8'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-tile'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-fp16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrc'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrs'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fzrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='serialize'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='tsx-ldtrk'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xfd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='SapphireRapids-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-int8'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-tile'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-fp16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fbsdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrc'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrs'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fzrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='psdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='sbdr-ssdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='serialize'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='tsx-ldtrk'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xfd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='SapphireRapids-v3'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-int8'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='amx-tile'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-bf16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-fp16'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512-vpopcntdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bitalg'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vbmi2'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='cldemote'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fbsdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrc'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrs'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fzrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='la57'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='movdir64b'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='movdiri'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='psdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='sbdr-ssdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='serialize'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ss'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='taa-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='tsx-ldtrk'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xfd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='SierraForest'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-ne-convert'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-vnni-int8'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='cmpccxadd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fbsdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrs'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='mcdt-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pbrsb-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='psdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='sbdr-ssdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='serialize'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='SierraForest-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-ifma'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-ne-convert'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-vnni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx-vnni-int8'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='bus-lock-detect'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='cmpccxadd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fbsdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='fsrs'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ibrs-all'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='mcdt-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pbrsb-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='psdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='sbdr-ssdp-no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='serialize'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vaes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='vpclmulqdq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Client'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Client-IBRS'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Client-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Client-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Client-v3'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Client-v4'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Server'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Server-IBRS'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Server-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Server-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='hle'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='rtm'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Server-v3'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Server-v4'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Skylake-Server-v5'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512bw'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512cd'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512dq'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512f'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='avx512vl'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='invpcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pcid'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='pku'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Snowridge'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='cldemote'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='core-capability'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='movdir64b'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='movdiri'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='mpx'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='split-lock-detect'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Snowridge-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='cldemote'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='core-capability'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='movdir64b'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='movdiri'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='mpx'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='split-lock-detect'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Snowridge-v2'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='cldemote'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='core-capability'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='movdir64b'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='movdiri'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='split-lock-detect'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Snowridge-v3'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='cldemote'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='core-capability'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='movdir64b'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='movdiri'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='split-lock-detect'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='Snowridge-v4'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='cldemote'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='erms'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='gfni'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='movdir64b'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='movdiri'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='xsaves'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='athlon'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='3dnow'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='3dnowext'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='athlon-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='3dnow'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='3dnowext'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='core2duo'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ss'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='core2duo-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ss'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='coreduo'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ss'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='coreduo-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ss'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='n270'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ss'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='n270-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='ss'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='phenom'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='3dnow'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='3dnowext'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <blockers model='phenom-v1'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='3dnow'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <feature name='3dnowext'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </blockers>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </mode>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   </cpu>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   <memoryBacking supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <enum name='sourceType'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <value>file</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <value>anonymous</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <value>memfd</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   </memoryBacking>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   <devices>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <disk supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='diskDevice'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>disk</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>cdrom</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>floppy</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>lun</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='bus'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>ide</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>fdc</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>scsi</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>virtio</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>usb</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>sata</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='model'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>virtio</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>virtio-transitional</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>virtio-non-transitional</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </disk>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <graphics supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='type'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>vnc</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>egl-headless</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>dbus</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </graphics>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <video supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='modelType'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>vga</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>cirrus</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>virtio</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>none</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>bochs</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>ramfb</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </video>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <hostdev supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='mode'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>subsystem</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='startupPolicy'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>default</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>mandatory</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>requisite</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>optional</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='subsysType'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>usb</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>pci</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>scsi</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='capsType'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='pciBackend'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </hostdev>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <rng supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='model'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>virtio</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>virtio-transitional</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>virtio-non-transitional</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='backendModel'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>random</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>egd</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>builtin</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </rng>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <filesystem supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='driverType'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>path</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>handle</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>virtiofs</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </filesystem>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <tpm supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='model'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>tpm-tis</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>tpm-crb</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='backendModel'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>emulator</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>external</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='backendVersion'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>2.0</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </tpm>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <redirdev supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='bus'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>usb</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </redirdev>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <channel supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='type'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>pty</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>unix</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </channel>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <crypto supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='model'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='type'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>qemu</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='backendModel'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>builtin</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </crypto>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <interface supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='backendType'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>default</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>passt</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </interface>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <panic supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='model'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>isa</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>hyperv</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </panic>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <console supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='type'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>null</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>vc</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>pty</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>dev</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>file</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>pipe</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>stdio</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>udp</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>tcp</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>unix</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>qemu-vdagent</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>dbus</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </console>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   </devices>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   <features>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <gic supported='no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <vmcoreinfo supported='yes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <genid supported='yes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <backingStoreInput supported='yes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <backup supported='yes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <async-teardown supported='yes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <ps2 supported='yes'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <sev supported='no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <sgx supported='no'/>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <hyperv supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='features'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>relaxed</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>vapic</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>spinlocks</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>vpindex</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>runtime</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>synic</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>stimer</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>reset</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>vendor_id</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>frequencies</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>reenlightenment</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>tlbflush</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>ipi</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>avic</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>emsr_bitmap</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>xmm_input</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <defaults>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <spinlocks>4095</spinlocks>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <stimer_direct>on</stimer_direct>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <tlbflush_direct>on</tlbflush_direct>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <tlbflush_extended>on</tlbflush_extended>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </defaults>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </hyperv>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     <launchSecurity supported='yes'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       <enum name='sectype'>
Dec 07 10:02:13 compute-1 nova_compute[230488]:         <value>tdx</value>
Dec 07 10:02:13 compute-1 nova_compute[230488]:       </enum>
Dec 07 10:02:13 compute-1 nova_compute[230488]:     </launchSecurity>
Dec 07 10:02:13 compute-1 nova_compute[230488]:   </features>
Dec 07 10:02:13 compute-1 nova_compute[230488]: </domainCapabilities>
Dec 07 10:02:13 compute-1 nova_compute[230488]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.291 230492 DEBUG nova.virt.libvirt.host [None req-46f2470e-57a4-43e0-ac4d-9a5c6971ef1a - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.291 230492 INFO nova.virt.libvirt.host [None req-46f2470e-57a4-43e0-ac4d-9a5c6971ef1a - - - - - -] Secure Boot support detected
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.294 230492 INFO nova.virt.libvirt.driver [None req-46f2470e-57a4-43e0-ac4d-9a5c6971ef1a - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.303 230492 DEBUG nova.virt.libvirt.driver [None req-46f2470e-57a4-43e0-ac4d-9a5c6971ef1a - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.344 230492 INFO nova.virt.node [None req-46f2470e-57a4-43e0-ac4d-9a5c6971ef1a - - - - - -] Determined node identity 58b51610-0751-43d9-94a3-66540bffec81 from /var/lib/nova/compute_id
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.372 230492 DEBUG nova.compute.manager [None req-46f2470e-57a4-43e0-ac4d-9a5c6971ef1a - - - - - -] Verified node 58b51610-0751-43d9-94a3-66540bffec81 matches my host compute-1.ctlplane.example.com _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.419 230492 INFO nova.compute.manager [None req-46f2470e-57a4-43e0-ac4d-9a5c6971ef1a - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Dec 07 10:02:13 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:02:13 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f962c001e90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:02:13 compute-1 ceph-mon[80077]: pgmap v592: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Dec 07 10:02:13 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/286348604' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:02:13 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/3475945244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.610 230492 DEBUG oslo_concurrency.lockutils [None req-46f2470e-57a4-43e0-ac4d-9a5c6971ef1a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.610 230492 DEBUG oslo_concurrency.lockutils [None req-46f2470e-57a4-43e0-ac4d-9a5c6971ef1a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.611 230492 DEBUG oslo_concurrency.lockutils [None req-46f2470e-57a4-43e0-ac4d-9a5c6971ef1a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
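The acquire/release pairs above come from oslo.concurrency's named-lock wrapper: the resource tracker runs its methods under a "compute_resources" lock, and the inner wrapper in lockutils.py emits the waited/held DEBUG lines seen here. A minimal sketch of that pattern (illustrative names, not nova's own code):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        # Body runs with the named lock held; the wrapper in lockutils.py logs
        # the "Acquiring lock" / "acquired" / "released" DEBUG lines shown above.
        pass

    clean_compute_node_cache()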
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.611 230492 DEBUG nova.compute.resource_tracker [None req-46f2470e-57a4-43e0-ac4d-9a5c6971ef1a - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 07 10:02:13 compute-1 nova_compute[230488]: 2025-12-07 10:02:13.611 230492 DEBUG oslo_concurrency.processutils [None req-46f2470e-57a4-43e0-ac4d-9a5c6971ef1a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 07 10:02:13 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:13 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:02:13 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:02:13.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:02:13 compute-1 rsyslogd[1006]: imjournal from <np0005549475:nova_compute>: begin to drop messages due to rate-limiting
Dec 07 10:02:14 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 07 10:02:14 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3281624193' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:02:14 compute-1 nova_compute[230488]: 2025-12-07 10:02:14.056 230492 DEBUG oslo_concurrency.processutils [None req-46f2470e-57a4-43e0-ac4d-9a5c6971ef1a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
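The resource audit shells out to "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" through oslo_concurrency.processutils and gets the cluster usage back in 0.444s. A minimal sketch of the same probe run by hand, assuming the client.openstack keyring is readable and using the JSON field names recent Ceph releases emit (adjust if the local cluster reports different keys):

    import json
    import subprocess

    # Same command the resource tracker logs above; needs the client.openstack keyring.
    cmd = ["ceph", "df", "--format=json", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
    stats = json.loads(out)

    # "stats" holds cluster-wide totals; "pools" holds per-pool usage.
    total = stats["stats"]["total_bytes"]
    avail = stats["stats"]["total_avail_bytes"]
    print(f"cluster: {avail / 2**30:.1f} GiB free of {total / 2**30:.1f} GiB")
    for pool in stats["pools"]:
        print(pool["name"], pool["stats"]["bytes_used"])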
Dec 07 10:02:14 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:02:14 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9608003720 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:02:14 compute-1 nova_compute[230488]: 2025-12-07 10:02:14.243 230492 WARNING nova.virt.libvirt.driver [None req-46f2470e-57a4-43e0-ac4d-9a5c6971ef1a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 07 10:02:14 compute-1 nova_compute[230488]: 2025-12-07 10:02:14.244 230492 DEBUG nova.compute.resource_tracker [None req-46f2470e-57a4-43e0-ac4d-9a5c6971ef1a - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5231MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 07 10:02:14 compute-1 nova_compute[230488]: 2025-12-07 10:02:14.245 230492 DEBUG oslo_concurrency.lockutils [None req-46f2470e-57a4-43e0-ac4d-9a5c6971ef1a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:02:14 compute-1 nova_compute[230488]: 2025-12-07 10:02:14.245 230492 DEBUG oslo_concurrency.lockutils [None req-46f2470e-57a4-43e0-ac4d-9a5c6971ef1a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:02:14 compute-1 nova_compute[230488]: 2025-12-07 10:02:14.472 230492 DEBUG nova.compute.resource_tracker [None req-46f2470e-57a4-43e0-ac4d-9a5c6971ef1a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 07 10:02:14 compute-1 nova_compute[230488]: 2025-12-07 10:02:14.472 230492 DEBUG nova.compute.resource_tracker [None req-46f2470e-57a4-43e0-ac4d-9a5c6971ef1a - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 07 10:02:14 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:14 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:02:14 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:02:14.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:02:14 compute-1 nova_compute[230488]: 2025-12-07 10:02:14.563 230492 DEBUG nova.scheduler.client.report [None req-46f2470e-57a4-43e0-ac4d-9a5c6971ef1a - - - - - -] Refreshing inventories for resource provider 58b51610-0751-43d9-94a3-66540bffec81 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 07 10:02:14 compute-1 nova_compute[230488]: 2025-12-07 10:02:14.578 230492 DEBUG nova.scheduler.client.report [None req-46f2470e-57a4-43e0-ac4d-9a5c6971ef1a - - - - - -] Updating ProviderTree inventory for provider 58b51610-0751-43d9-94a3-66540bffec81 from _refresh_and_get_inventory using data: {} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 07 10:02:14 compute-1 nova_compute[230488]: 2025-12-07 10:02:14.578 230492 DEBUG nova.compute.provider_tree [None req-46f2470e-57a4-43e0-ac4d-9a5c6971ef1a - - - - - -] Inventory has not changed in ProviderTree for provider: 58b51610-0751-43d9-94a3-66540bffec81 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 07 10:02:14 compute-1 nova_compute[230488]: 2025-12-07 10:02:14.595 230492 DEBUG nova.scheduler.client.report [None req-46f2470e-57a4-43e0-ac4d-9a5c6971ef1a - - - - - -] Refreshing aggregate associations for resource provider 58b51610-0751-43d9-94a3-66540bffec81, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 07 10:02:14 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/3281624193' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:02:14 compute-1 nova_compute[230488]: 2025-12-07 10:02:14.614 230492 DEBUG nova.scheduler.client.report [None req-46f2470e-57a4-43e0-ac4d-9a5c6971ef1a - - - - - -] Refreshing trait associations for resource provider 58b51610-0751-43d9-94a3-66540bffec81, traits: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 07 10:02:14 compute-1 nova_compute[230488]: 2025-12-07 10:02:14.646 230492 DEBUG oslo_concurrency.processutils [None req-46f2470e-57a4-43e0-ac4d-9a5c6971ef1a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 07 10:02:14 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:02:14 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f96280021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:02:15 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 07 10:02:15 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3072786979' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:02:15 compute-1 nova_compute[230488]: 2025-12-07 10:02:15.128 230492 DEBUG oslo_concurrency.processutils [None req-46f2470e-57a4-43e0-ac4d-9a5c6971ef1a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 07 10:02:15 compute-1 nova_compute[230488]: 2025-12-07 10:02:15.133 230492 DEBUG nova.virt.libvirt.host [None req-46f2470e-57a4-43e0-ac4d-9a5c6971ef1a - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Dec 07 10:02:15 compute-1 nova_compute[230488]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Dec 07 10:02:15 compute-1 nova_compute[230488]: 2025-12-07 10:02:15.133 230492 INFO nova.virt.libvirt.host [None req-46f2470e-57a4-43e0-ac4d-9a5c6971ef1a - - - - - -] kernel doesn't support AMD SEV
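The SEV probe above simply reads /sys/module/kvm_amd/parameters/sev, finds "N", and reports AMD SEV as unsupported. A rough equivalent of that check (a sketch only, not nova's exact parsing; the file is absent when the kvm_amd module is not loaded):

    from pathlib import Path

    SEV_PARAM = Path("/sys/module/kvm_amd/parameters/sev")

    def kernel_supports_amd_sev() -> bool:
        # Missing file: kvm_amd not loaded, so no SEV. Otherwise the parameter
        # reads "Y"/"1" when enabled; here it reads "N", hence the INFO line above.
        if not SEV_PARAM.exists():
            return False
        return SEV_PARAM.read_text().strip().lower() in ("y", "1")

    print(kernel_supports_amd_sev())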
Dec 07 10:02:15 compute-1 nova_compute[230488]: 2025-12-07 10:02:15.134 230492 DEBUG nova.compute.provider_tree [None req-46f2470e-57a4-43e0-ac4d-9a5c6971ef1a - - - - - -] Updating inventory in ProviderTree for provider 58b51610-0751-43d9-94a3-66540bffec81 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 07 10:02:15 compute-1 nova_compute[230488]: 2025-12-07 10:02:15.135 230492 DEBUG nova.virt.libvirt.driver [None req-46f2470e-57a4-43e0-ac4d-9a5c6971ef1a - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 07 10:02:15 compute-1 nova_compute[230488]: 2025-12-07 10:02:15.212 230492 DEBUG nova.scheduler.client.report [None req-46f2470e-57a4-43e0-ac4d-9a5c6971ef1a - - - - - -] Updated inventory for provider 58b51610-0751-43d9-94a3-66540bffec81 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Dec 07 10:02:15 compute-1 nova_compute[230488]: 2025-12-07 10:02:15.213 230492 DEBUG nova.compute.provider_tree [None req-46f2470e-57a4-43e0-ac4d-9a5c6971ef1a - - - - - -] Updating resource provider 58b51610-0751-43d9-94a3-66540bffec81 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Dec 07 10:02:15 compute-1 nova_compute[230488]: 2025-12-07 10:02:15.214 230492 DEBUG nova.compute.provider_tree [None req-46f2470e-57a4-43e0-ac4d-9a5c6971ef1a - - - - - -] Updating inventory in ProviderTree for provider 58b51610-0751-43d9-94a3-66540bffec81 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 07 10:02:15 compute-1 nova_compute[230488]: 2025-12-07 10:02:15.329 230492 DEBUG nova.compute.provider_tree [None req-46f2470e-57a4-43e0-ac4d-9a5c6971ef1a - - - - - -] Updating resource provider 58b51610-0751-43d9-94a3-66540bffec81 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Dec 07 10:02:15 compute-1 nova_compute[230488]: 2025-12-07 10:02:15.372 230492 DEBUG nova.compute.resource_tracker [None req-46f2470e-57a4-43e0-ac4d-9a5c6971ef1a - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 07 10:02:15 compute-1 nova_compute[230488]: 2025-12-07 10:02:15.373 230492 DEBUG oslo_concurrency.lockutils [None req-46f2470e-57a4-43e0-ac4d-9a5c6971ef1a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.128s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
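The inventory pushed to Placement above carries totals, reservations and allocation ratios rather than a precomputed capacity; Placement derives the schedulable amount per resource class as (total - reserved) * allocation_ratio. A small worked example with the numbers from this update (arithmetic only, not an API call):

    # Inventory as reported for provider 58b51610-0751-43d9-94a3-66540bffec81.
    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g} schedulable")
    # MEMORY_MB: 7167, VCPU: 32, DISK_GB: 53.1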
Dec 07 10:02:15 compute-1 nova_compute[230488]: 2025-12-07 10:02:15.373 230492 DEBUG nova.service [None req-46f2470e-57a4-43e0-ac4d-9a5c6971ef1a - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Dec 07 10:02:15 compute-1 nova_compute[230488]: 2025-12-07 10:02:15.450 230492 DEBUG nova.service [None req-46f2470e-57a4-43e0-ac4d-9a5c6971ef1a - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Dec 07 10:02:15 compute-1 nova_compute[230488]: 2025-12-07 10:02:15.451 230492 DEBUG nova.servicegroup.drivers.db [None req-46f2470e-57a4-43e0-ac4d-9a5c6971ef1a - - - - - -] DB_Driver: join new ServiceGroup member compute-1.ctlplane.example.com to the compute group, service = <Service: host=compute-1.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Dec 07 10:02:15 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[224147]: 07/12/2025 10:02:15 : epoch 6935506d : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9610004440 fd 38 proxy ignored for local
Dec 07 10:02:15 compute-1 kernel: ganesha.nfsd[226520]: segfault at 50 ip 00007f96e169b32e sp 00007f969affc210 error 4 in libntirpc.so.5.8[7f96e1680000+2c000] likely on CPU 7 (core 0, socket 7)
Dec 07 10:02:15 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec 07 10:02:15 compute-1 systemd[1]: Started Process Core Dump (PID 230827/UID 0).
Dec 07 10:02:15 compute-1 ceph-mon[80077]: pgmap v593: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Dec 07 10:02:15 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/3072786979' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:02:15 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:15 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 07 10:02:15 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:02:15.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 07 10:02:16 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:02:16 compute-1 systemd-coredump[230828]: Process 224151 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 52:
                                                    #0  0x00007f96e169b32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
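The kernel segfault line at 10:02:15 gives the faulting instruction pointer and the in-memory range of libntirpc.so.5.8, while systemd-coredump resolves the same address against the ELF object as a whole, so the two offsets differ by the start of the library's executable segment (0x7000 here, inferred from the difference rather than read from the binary). The arithmetic for reading the two reports together:

    ip        = 0x7f96e169b32e  # faulting instruction pointer (kernel segfault line)
    text_base = 0x7f96e1680000  # base of the libntirpc.so.5.8 executable mapping
    frame_off = 0x2232e         # frame #0 offset reported by systemd-coredump

    map_off = ip - text_base                       # 0x1b32e inside the r-x mapping
    print(hex(map_off), hex(frame_off - map_off))  # 0x1b32e 0x7000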
Dec 07 10:02:16 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:16 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:02:16 compute-1 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 07 10:02:16 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:02:16.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:02:16 compute-1 systemd[1]: systemd-coredump@10-230827-0.service: Deactivated successfully.
Dec 07 10:02:16 compute-1 systemd[1]: systemd-coredump@10-230827-0.service: Consumed 1.056s CPU time.
Dec 07 10:02:16 compute-1 podman[230835]: 2025-12-07 10:02:16.671457656 +0000 UTC m=+0.031660890 container died 4ca9ca4a7dc30b1e7d3b4bbd1d4f91f2eb46a55a076a7e2e910639b56372058d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec 07 10:02:16 compute-1 systemd[1]: var-lib-containers-storage-overlay-938e354b0b02bc182524b41e35a3578a0c9f6143af19ee7b21802fbec34d8f5c-merged.mount: Deactivated successfully.
Dec 07 10:02:16 compute-1 podman[230835]: 2025-12-07 10:02:16.707608429 +0000 UTC m=+0.067811623 container remove 4ca9ca4a7dc30b1e7d3b4bbd1d4f91f2eb46a55a076a7e2e910639b56372058d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 07 10:02:16 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Main process exited, code=exited, status=139/n/a
Dec 07 10:02:16 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Failed with result 'exit-code'.
Dec 07 10:02:16 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Consumed 1.536s CPU time.
Dec 07 10:02:17 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:17 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:02:17 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:02:17.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:02:18 compute-1 ceph-mon[80077]: pgmap v594: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 07 10:02:18 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:18 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:02:18 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:02:18.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:02:19 compute-1 podman[230879]: 2025-12-07 10:02:19.558641405 +0000 UTC m=+0.055908277 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec 07 10:02:19 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:19 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:02:19 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:02:19.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:02:20 compute-1 ceph-mon[80077]: pgmap v595: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 170 B/s wr, 0 op/s
Dec 07 10:02:20 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:20 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:02:20 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:02:20.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:02:20 compute-1 sudo[230900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:02:20 compute-1 sudo[230900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:02:20 compute-1 sudo[230900]: pam_unix(sudo:session): session closed for user root
Dec 07 10:02:21 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:02:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/100221 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 07 10:02:21 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:21 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:02:21 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:02:21.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:02:22 compute-1 ceph-mon[80077]: pgmap v596: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 1 op/s
Dec 07 10:02:22 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:22 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:02:22 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:02:22.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:02:23 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:23 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:02:23 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:02:23.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:02:24 compute-1 ceph-mon[80077]: pgmap v597: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec 07 10:02:24 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:24 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:02:24 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:02:24.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:02:25 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:25 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:02:25 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:02:25.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:02:26 compute-1 ceph-mon[80077]: pgmap v598: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Dec 07 10:02:26 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:02:26 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:26 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:02:26 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:02:26.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:02:26 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Scheduled restart job, restart counter is at 11.
Dec 07 10:02:26 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.jddrlu for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec 07 10:02:26 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Consumed 1.536s CPU time.
Dec 07 10:02:26 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.jddrlu for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c...
Dec 07 10:02:27 compute-1 podman[230979]: 2025-12-07 10:02:27.18511304 +0000 UTC m=+0.039061863 container create 090394ead9e67cfff725c14b60072308c0a2b73dba73a01de5e13ee510b5fc53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 07 10:02:27 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c51d04e5265705477744435e79911de96dd1fce75b1b6ccc3032f7f0733914fc/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec 07 10:02:27 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c51d04e5265705477744435e79911de96dd1fce75b1b6ccc3032f7f0733914fc/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec 07 10:02:27 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c51d04e5265705477744435e79911de96dd1fce75b1b6ccc3032f7f0733914fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 07 10:02:27 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c51d04e5265705477744435e79911de96dd1fce75b1b6ccc3032f7f0733914fc/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.jddrlu-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec 07 10:02:27 compute-1 podman[230979]: 2025-12-07 10:02:27.241340075 +0000 UTC m=+0.095288898 container init 090394ead9e67cfff725c14b60072308c0a2b73dba73a01de5e13ee510b5fc53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 07 10:02:27 compute-1 podman[230979]: 2025-12-07 10:02:27.255768241 +0000 UTC m=+0.109717104 container start 090394ead9e67cfff725c14b60072308c0a2b73dba73a01de5e13ee510b5fc53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 07 10:02:27 compute-1 bash[230979]: 090394ead9e67cfff725c14b60072308c0a2b73dba73a01de5e13ee510b5fc53
Dec 07 10:02:27 compute-1 podman[230979]: 2025-12-07 10:02:27.170446207 +0000 UTC m=+0.024395050 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 07 10:02:27 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.jddrlu for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec 07 10:02:27 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:27 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec 07 10:02:27 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:27 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec 07 10:02:27 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:27 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec 07 10:02:27 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:27 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec 07 10:02:27 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:27 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec 07 10:02:27 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:27 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec 07 10:02:27 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:27 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec 07 10:02:27 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:27 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 07 10:02:27 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:27 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:02:27 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:02:27.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:02:28 compute-1 ceph-mon[80077]: pgmap v599: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Dec 07 10:02:28 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:02:28 compute-1 sshd-session[231037]: Invalid user server from 104.248.193.130 port 35214
Dec 07 10:02:28 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:28 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:02:28 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:02:28.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:02:28 compute-1 sshd-session[231037]: Connection closed by invalid user server 104.248.193.130 port 35214 [preauth]
Dec 07 10:02:29 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:29 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:02:29 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:02:29.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:02:30 compute-1 ceph-mon[80077]: pgmap v600: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Dec 07 10:02:30 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:30 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:02:30 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:02:30.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:02:31 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:02:31 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:31 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:02:31 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:02:31.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:02:32 compute-1 ceph-mon[80077]: pgmap v601: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 597 B/s wr, 2 op/s
Dec 07 10:02:32 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:32 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:02:32 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:02:32.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:02:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:33 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Dec 07 10:02:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:33 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Dec 07 10:02:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:33 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 07 10:02:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:33 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 07 10:02:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:33 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 07 10:02:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:33 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 07 10:02:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:33 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 07 10:02:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:33 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 07 10:02:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:33 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 07 10:02:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:33 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 07 10:02:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:33 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 07 10:02:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:33 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 07 10:02:33 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:33 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.002000055s ======
Dec 07 10:02:33 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:02:33.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000055s
Dec 07 10:02:34 compute-1 ceph-mon[80077]: pgmap v602: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 341 B/s wr, 1 op/s
Dec 07 10:02:34 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:34 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:02:34 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:02:34.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:02:35 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:35 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:02:35 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:02:35.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:02:35 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/100235 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 07 10:02:36 compute-1 ceph-mon[80077]: pgmap v603: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Dec 07 10:02:36 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:02:36 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:36 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:02:36 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:02:36.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:02:36 compute-1 podman[231044]: 2025-12-07 10:02:36.63515512 +0000 UTC m=+0.134649469 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 07 10:02:37 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:37 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:02:37 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:02:37.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:02:38 compute-1 ceph-mon[80077]: pgmap v604: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Dec 07 10:02:38 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:38 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:02:38 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:02:38.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:02:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:02:38.634 142603 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:02:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:02:38.635 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:02:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:02:38.635 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:02:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:39 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-000000000000001e:nfs.cephfs.0: -2
Dec 07 10:02:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:39 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 07 10:02:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:39 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec 07 10:02:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:39 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec 07 10:02:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:39 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec 07 10:02:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:39 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec 07 10:02:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:39 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec 07 10:02:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:39 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec 07 10:02:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:39 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 10:02:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:39 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 10:02:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:39 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 10:02:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:39 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec 07 10:02:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:39 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 10:02:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:39 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec 07 10:02:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:39 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec 07 10:02:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:39 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec 07 10:02:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:39 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec 07 10:02:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:39 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec 07 10:02:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:39 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec 07 10:02:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:39 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec 07 10:02:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:39 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec 07 10:02:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:39 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec 07 10:02:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:39 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec 07 10:02:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:39 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec 07 10:02:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:39 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec 07 10:02:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:39 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 07 10:02:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:39 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec 07 10:02:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:39 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 07 10:02:39 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:39 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:02:39 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:02:39.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:02:40 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:40 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe724000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:02:40 compute-1 ceph-mon[80077]: pgmap v605: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Dec 07 10:02:40 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:40 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:02:40 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:02:40.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:02:40 compute-1 podman[231087]: 2025-12-07 10:02:40.609051016 +0000 UTC m=+0.104919133 container health_status 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 07 10:02:40 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 07 10:02:40 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1667258348' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 07 10:02:40 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 07 10:02:40 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1667258348' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 07 10:02:40 compute-1 sudo[231108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:02:40 compute-1 sudo[231108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:02:40 compute-1 sudo[231108]: pam_unix(sudo:session): session closed for user root
Dec 07 10:02:40 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 07 10:02:40 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2100801260' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 07 10:02:40 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 07 10:02:40 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2100801260' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 07 10:02:40 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:40 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe7180014d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:02:41 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:02:41 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/1667258348' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 07 10:02:41 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/1667258348' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 07 10:02:41 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/2100801260' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 07 10:02:41 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/2100801260' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 07 10:02:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:41 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe700000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:02:41 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:41 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:02:41 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:02:41.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:02:42 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:42 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe6f8000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:02:42 compute-1 ceph-mon[80077]: pgmap v606: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Dec 07 10:02:42 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/246843581' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 07 10:02:42 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/246843581' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 07 10:02:42 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:42 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 07 10:02:42 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:02:42.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 07 10:02:42 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:42 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe704000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:02:43 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:02:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/100243 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 07 10:02:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:43 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe7180021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:02:43 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:43 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:02:43 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:02:43.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:02:44 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:44 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe7000016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:02:44 compute-1 ceph-mon[80077]: pgmap v607: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 07 10:02:44 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:44 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:02:44 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:02:44.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:02:44 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:44 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe6f80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:02:45 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:45 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe704001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:02:45 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:45 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:02:45 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:02:45.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:02:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:46 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe7180021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:02:46 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:02:46 compute-1 ceph-mon[80077]: pgmap v608: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 07 10:02:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/100246 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 07 10:02:46 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:46 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:02:46 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:02:46.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:02:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:46 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe7000016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:02:47 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:47 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe6f80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:02:47 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:47 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:02:47 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:02:47.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:02:48 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:48 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe704001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:02:48 compute-1 ceph-mon[80077]: pgmap v609: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Dec 07 10:02:48 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:48 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:02:48 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:02:48.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:02:48 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:48 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe7180021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:02:49 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:49 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe7000016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:02:49 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:49 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:02:49 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:02:49.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:02:50 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:50 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe6f80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:02:50 compute-1 ceph-mon[80077]: pgmap v610: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Dec 07 10:02:50 compute-1 podman[231138]: 2025-12-07 10:02:50.543606494 +0000 UTC m=+0.047595058 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 07 10:02:50 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:50 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:02:50 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:02:50.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:02:50 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:50 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe704001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:02:51 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:02:51 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:51 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe7180021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:02:51 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:51 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:02:51 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:02:51.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:02:52 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:52 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe700002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:02:52 compute-1 ceph-mon[80077]: pgmap v611: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 255 B/s wr, 1 op/s
Dec 07 10:02:52 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:52 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:02:52 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:02:52.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:02:52 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:52 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe6f8002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:02:53 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:53 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe704002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:02:53 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:53 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:02:53 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:02:53.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:02:54 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:54 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe7180021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:02:54 compute-1 ceph-mon[80077]: pgmap v612: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Dec 07 10:02:54 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:54 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:02:54 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:02:54.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:02:54 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:54 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe700002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:02:55 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:55 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 07 10:02:55 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:55 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe6f8002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:02:55 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:55 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:02:55 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:02:55.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:02:55 compute-1 sudo[231159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 10:02:55 compute-1 sudo[231159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:02:55 compute-1 sudo[231159]: pam_unix(sudo:session): session closed for user root
Dec 07 10:02:55 compute-1 sudo[231184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 07 10:02:55 compute-1 sudo[231184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:02:56 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:56 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe704002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:02:56 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:02:56 compute-1 ceph-mon[80077]: pgmap v613: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Dec 07 10:02:56 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:56 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:02:56 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:02:56.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:02:56 compute-1 sudo[231184]: pam_unix(sudo:session): session closed for user root
Dec 07 10:02:56 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:56 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe7180021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:02:57 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:02:57 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 07 10:02:57 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:02:57 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:02:57 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 07 10:02:57 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 07 10:02:57 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:02:57 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:02:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:57 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe700002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:02:57 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:57 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:02:57 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:02:57.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:02:58 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:58 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 07 10:02:58 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:58 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 07 10:02:58 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:58 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe6f8002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:02:58 compute-1 ceph-mon[80077]: pgmap v614: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 07 10:02:58 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:58 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:02:58 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:02:58.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:02:58 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:58 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe704002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:02:59 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:02:59 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe7180021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:02:59 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:02:59 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:02:59 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:02:59.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:03:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:03:00 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe700003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:03:00 compute-1 nova_compute[230488]: 2025-12-07 10:03:00.453 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:03:00 compute-1 nova_compute[230488]: 2025-12-07 10:03:00.493 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:03:00 compute-1 ceph-mon[80077]: pgmap v615: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 255 B/s wr, 0 op/s
Dec 07 10:03:00 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:00 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:03:00 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:00.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:03:00 compute-1 sudo[231244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:03:00 compute-1 sudo[231244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:03:00 compute-1 sudo[231244]: pam_unix(sudo:session): session closed for user root
Dec 07 10:03:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:03:00 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe6f8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:03:01 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:03:01 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 07 10:03:01 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:03:01 compute-1 sudo[231269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 07 10:03:01 compute-1 sudo[231269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:03:01 compute-1 sudo[231269]: pam_unix(sudo:session): session closed for user root
Dec 07 10:03:01 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:03:01 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe704004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:03:01 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:01 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:03:01 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:01.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:03:02 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:03:02 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe7180021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:03:02 compute-1 ceph-mon[80077]: pgmap v616: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Dec 07 10:03:02 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:03:02 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:03:02 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:02 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:03:02 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:02.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:03:02 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:03:02 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe700003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:03:03 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:03:03 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe6f8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:03:03 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:03 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:03:03 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:03.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:03:04 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:03:04 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe704004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:03:04 compute-1 ceph-mon[80077]: pgmap v617: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 07 10:03:04 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:04 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:03:04 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:04.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:03:04 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:03:04 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe7180021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:03:05 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:03:05 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe700003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:03:05 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:05 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:03:05 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:05.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:03:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:03:06 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe6f8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:03:06 compute-1 ceph-mon[80077]: pgmap v618: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 07 10:03:06 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:03:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/100306 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 07 10:03:06 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:06 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:03:06 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:06.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:03:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:03:06 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe704004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:03:07 compute-1 kernel: ganesha.nfsd[231074]: segfault at 50 ip 00007fe7cfadc32e sp 00007fe799ffa210 error 4 in libntirpc.so.5.8[7fe7cfac1000+2c000] likely on CPU 5 (core 0, socket 5)
Dec 07 10:03:07 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec 07 10:03:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[230995]: 07/12/2025 10:03:07 : epoch 693550b3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fe7180021d0 fd 39 proxy ignored for local
Dec 07 10:03:07 compute-1 systemd[1]: Started Process Core Dump (PID 231308/UID 0).
Dec 07 10:03:07 compute-1 podman[231297]: 2025-12-07 10:03:07.573161401 +0000 UTC m=+0.079876066 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 07 10:03:07 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:07 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:03:07 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:07.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:03:08 compute-1 ceph-mon[80077]: pgmap v619: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Dec 07 10:03:08 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:08 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:03:08 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:08.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:03:08 compute-1 systemd-coredump[231315]: Process 230999 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 43:
                                                    #0  0x00007fe7cfadc32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Dec 07 10:03:09 compute-1 systemd[1]: systemd-coredump@11-231308-0.service: Deactivated successfully.
Dec 07 10:03:09 compute-1 systemd[1]: systemd-coredump@11-231308-0.service: Consumed 1.444s CPU time.
Dec 07 10:03:09 compute-1 podman[231328]: 2025-12-07 10:03:09.111674496 +0000 UTC m=+0.022870239 container died 090394ead9e67cfff725c14b60072308c0a2b73dba73a01de5e13ee510b5fc53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 07 10:03:09 compute-1 systemd[1]: var-lib-containers-storage-overlay-c51d04e5265705477744435e79911de96dd1fce75b1b6ccc3032f7f0733914fc-merged.mount: Deactivated successfully.
Dec 07 10:03:09 compute-1 podman[231328]: 2025-12-07 10:03:09.157067553 +0000 UTC m=+0.068263276 container remove 090394ead9e67cfff725c14b60072308c0a2b73dba73a01de5e13ee510b5fc53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec 07 10:03:09 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Main process exited, code=exited, status=139/n/a
Dec 07 10:03:09 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Failed with result 'exit-code'.
Dec 07 10:03:09 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Consumed 1.279s CPU time.
Dec 07 10:03:09 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:09 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:03:09 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:09.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:03:10 compute-1 ceph-mon[80077]: pgmap v620: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Dec 07 10:03:10 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:10 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:03:10 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:10.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:03:11 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/1040818339' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:03:11 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/3695092401' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:03:11 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:03:11 compute-1 podman[231372]: 2025-12-07 10:03:11.561394289 +0000 UTC m=+0.061624664 container health_status 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 07 10:03:11 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:11 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:03:11 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:11.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:03:12 compute-1 nova_compute[230488]: 2025-12-07 10:03:12.272 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:03:12 compute-1 nova_compute[230488]: 2025-12-07 10:03:12.273 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:03:12 compute-1 nova_compute[230488]: 2025-12-07 10:03:12.273 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 07 10:03:12 compute-1 nova_compute[230488]: 2025-12-07 10:03:12.273 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 07 10:03:12 compute-1 nova_compute[230488]: 2025-12-07 10:03:12.287 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 07 10:03:12 compute-1 nova_compute[230488]: 2025-12-07 10:03:12.287 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:03:12 compute-1 nova_compute[230488]: 2025-12-07 10:03:12.287 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:03:12 compute-1 nova_compute[230488]: 2025-12-07 10:03:12.288 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:03:12 compute-1 nova_compute[230488]: 2025-12-07 10:03:12.288 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:03:12 compute-1 nova_compute[230488]: 2025-12-07 10:03:12.288 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:03:12 compute-1 nova_compute[230488]: 2025-12-07 10:03:12.288 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:03:12 compute-1 nova_compute[230488]: 2025-12-07 10:03:12.288 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 07 10:03:12 compute-1 nova_compute[230488]: 2025-12-07 10:03:12.289 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:03:12 compute-1 nova_compute[230488]: 2025-12-07 10:03:12.309 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:03:12 compute-1 nova_compute[230488]: 2025-12-07 10:03:12.310 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:03:12 compute-1 nova_compute[230488]: 2025-12-07 10:03:12.310 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:03:12 compute-1 nova_compute[230488]: 2025-12-07 10:03:12.310 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 07 10:03:12 compute-1 nova_compute[230488]: 2025-12-07 10:03:12.310 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 07 10:03:12 compute-1 ceph-mon[80077]: pgmap v621: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 767 B/s wr, 2 op/s
Dec 07 10:03:12 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/3015393853' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:03:12 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/4007945682' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:03:12 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:12 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:03:12 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:12.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:03:12 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 07 10:03:12 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2411862264' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:03:12 compute-1 nova_compute[230488]: 2025-12-07 10:03:12.801 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 07 10:03:12 compute-1 sshd-session[231413]: Invalid user hadoop from 104.248.193.130 port 44544
Dec 07 10:03:12 compute-1 nova_compute[230488]: 2025-12-07 10:03:12.961 230492 WARNING nova.virt.libvirt.driver [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 07 10:03:12 compute-1 nova_compute[230488]: 2025-12-07 10:03:12.962 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5292MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 07 10:03:12 compute-1 nova_compute[230488]: 2025-12-07 10:03:12.962 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:03:12 compute-1 nova_compute[230488]: 2025-12-07 10:03:12.963 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:03:12 compute-1 sshd-session[231413]: Connection closed by invalid user hadoop 104.248.193.130 port 44544 [preauth]
Dec 07 10:03:13 compute-1 nova_compute[230488]: 2025-12-07 10:03:13.048 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 07 10:03:13 compute-1 nova_compute[230488]: 2025-12-07 10:03:13.049 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 07 10:03:13 compute-1 nova_compute[230488]: 2025-12-07 10:03:13.072 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 07 10:03:13 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:03:13 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/2411862264' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:03:13 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/100313 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 07 10:03:13 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 07 10:03:13 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2103768955' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:03:13 compute-1 nova_compute[230488]: 2025-12-07 10:03:13.545 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 07 10:03:13 compute-1 nova_compute[230488]: 2025-12-07 10:03:13.553 230492 DEBUG nova.compute.provider_tree [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Inventory has not changed in ProviderTree for provider: 58b51610-0751-43d9-94a3-66540bffec81 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 07 10:03:13 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:13 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:03:13 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:13.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:03:13 compute-1 nova_compute[230488]: 2025-12-07 10:03:13.892 230492 DEBUG nova.scheduler.client.report [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Inventory has not changed for provider 58b51610-0751-43d9-94a3-66540bffec81 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 07 10:03:13 compute-1 nova_compute[230488]: 2025-12-07 10:03:13.894 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 07 10:03:13 compute-1 nova_compute[230488]: 2025-12-07 10:03:13.894 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.932s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:03:14 compute-1 ceph-mon[80077]: pgmap v622: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Dec 07 10:03:14 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/2103768955' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:03:14 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:14 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:03:14 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:14.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:03:15 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:15 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:03:15 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:15.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:03:16 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:03:16 compute-1 ceph-mon[80077]: pgmap v623: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec 07 10:03:16 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:16 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:03:16 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:16.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:03:17 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:17 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:03:17 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:17.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:03:18 compute-1 ceph-mon[80077]: pgmap v624: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 07 10:03:18 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:18 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:03:18 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:18.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:03:19 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Scheduled restart job, restart counter is at 12.
Dec 07 10:03:19 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.jddrlu for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec 07 10:03:19 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Consumed 1.279s CPU time.
Dec 07 10:03:19 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.jddrlu for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c...
Dec 07 10:03:19 compute-1 podman[231490]: 2025-12-07 10:03:19.736964124 +0000 UTC m=+0.043735762 container create dfb156753dafc9eecb891462320ff25341e0c9199d2d3177b5708e4f2e1c9884 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 07 10:03:19 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:19 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:03:19 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:19.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:03:19 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0b1127be4fdf67157e5fe441f166f787520017706f0558ac3efcb4a62d78c19/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec 07 10:03:19 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0b1127be4fdf67157e5fe441f166f787520017706f0558ac3efcb4a62d78c19/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 07 10:03:19 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0b1127be4fdf67157e5fe441f166f787520017706f0558ac3efcb4a62d78c19/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec 07 10:03:19 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0b1127be4fdf67157e5fe441f166f787520017706f0558ac3efcb4a62d78c19/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.jddrlu-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec 07 10:03:19 compute-1 podman[231490]: 2025-12-07 10:03:19.797333792 +0000 UTC m=+0.104105460 container init dfb156753dafc9eecb891462320ff25341e0c9199d2d3177b5708e4f2e1c9884 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec 07 10:03:19 compute-1 podman[231490]: 2025-12-07 10:03:19.803235744 +0000 UTC m=+0.110007382 container start dfb156753dafc9eecb891462320ff25341e0c9199d2d3177b5708e4f2e1c9884 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 07 10:03:19 compute-1 bash[231490]: dfb156753dafc9eecb891462320ff25341e0c9199d2d3177b5708e4f2e1c9884
Dec 07 10:03:19 compute-1 podman[231490]: 2025-12-07 10:03:19.717260973 +0000 UTC m=+0.024032631 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 07 10:03:19 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:19 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec 07 10:03:19 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:19 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec 07 10:03:19 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.jddrlu for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec 07 10:03:19 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:19 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec 07 10:03:19 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:19 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec 07 10:03:19 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:19 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec 07 10:03:19 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:19 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec 07 10:03:19 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:19 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec 07 10:03:19 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:19 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 07 10:03:20 compute-1 radosgw[84964]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Dec 07 10:03:20 compute-1 radosgw[84964]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Dec 07 10:03:20 compute-1 radosgw[84964]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Dec 07 10:03:20 compute-1 ceph-mon[80077]: pgmap v625: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 07 10:03:20 compute-1 radosgw[84964]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Dec 07 10:03:20 compute-1 radosgw[84964]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Dec 07 10:03:20 compute-1 radosgw[84964]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Dec 07 10:03:20 compute-1 radosgw[84964]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Dec 07 10:03:20 compute-1 radosgw[84964]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Dec 07 10:03:20 compute-1 radosgw[84964]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Dec 07 10:03:20 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:20 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:03:20 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:20.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:03:21 compute-1 sudo[231549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:03:21 compute-1 sudo[231549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:03:21 compute-1 sudo[231549]: pam_unix(sudo:session): session closed for user root
Dec 07 10:03:21 compute-1 podman[231573]: 2025-12-07 10:03:21.082430528 +0000 UTC m=+0.056293567 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 07 10:03:21 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:03:21 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:21 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:03:21 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:21.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:03:22 compute-1 ceph-mon[80077]: pgmap v626: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 07 10:03:22 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:22 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:03:22 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:22.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:03:23 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:23 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:03:23 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:23.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:03:24 compute-1 ceph-mon[80077]: pgmap v627: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 07 10:03:24 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:24 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:03:24 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:24.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:03:25 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:25 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:03:25 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:25.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:03:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:25 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 07 10:03:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:25 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 07 10:03:26 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:03:26 compute-1 ceph-mon[80077]: pgmap v628: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 597 B/s wr, 166 op/s
Dec 07 10:03:26 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:26 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:03:26 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:26.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:03:27 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:03:27 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:27 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:03:27 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:27.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:03:28 compute-1 ceph-mon[80077]: pgmap v629: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 597 B/s wr, 166 op/s
Dec 07 10:03:28 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:28 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:03:28 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:28.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:03:29 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:29 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:03:29 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:29.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:03:30 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:30 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:03:30 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:30.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:03:30 compute-1 ceph-mon[80077]: pgmap v630: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 597 B/s wr, 166 op/s
Dec 07 10:03:31 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:03:31 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:31 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:03:31 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:31.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:03:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:31 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 07 10:03:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:31 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec 07 10:03:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:31 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec 07 10:03:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:31 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec 07 10:03:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:31 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec 07 10:03:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:31 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec 07 10:03:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:31 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec 07 10:03:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:31 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 10:03:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:31 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 10:03:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:31 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 10:03:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:31 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec 07 10:03:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:31 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 10:03:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:31 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec 07 10:03:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:31 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec 07 10:03:32 compute-1 ceph-mon[80077]: pgmap v631: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 1023 B/s wr, 168 op/s
Dec 07 10:03:32 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:32 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec 07 10:03:32 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:32 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec 07 10:03:32 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:32 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec 07 10:03:32 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:32 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec 07 10:03:32 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:32 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec 07 10:03:32 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:32 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec 07 10:03:32 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:32 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec 07 10:03:32 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:32 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec 07 10:03:32 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:32 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec 07 10:03:32 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:32 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec 07 10:03:32 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:32 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 07 10:03:32 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:32 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec 07 10:03:32 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:32 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 07 10:03:32 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:32 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd884000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:03:32 compute-1 rsyslogd[1006]: imjournal: 2052 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Dec 07 10:03:32 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:32 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:03:32 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:32.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:03:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:33 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd878001d80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:03:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:33 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd860000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:03:33 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:33 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:03:33 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:33.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:03:34 compute-1 ceph-mon[80077]: pgmap v632: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 1023 B/s wr, 168 op/s
Dec 07 10:03:34 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:34 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd884000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:03:34 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:34 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:03:34 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:34.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:03:35 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:35 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd864000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:03:35 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/100335 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 07 10:03:35 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:35 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd8780028a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:03:35 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:35 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:03:35 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:35.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:03:36 compute-1 ceph-mon[80077]: pgmap v633: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 1023 B/s wr, 168 op/s
Dec 07 10:03:36 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:36 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd8600016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:03:36 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:03:36 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/100336 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 07 10:03:36 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:36 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:03:36 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:36.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:03:37 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:37 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd884000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:03:37 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:37 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd884000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:03:37 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:37 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 07 10:03:37 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:37.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 07 10:03:38 compute-1 ceph-mon[80077]: pgmap v634: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec 07 10:03:38 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:38 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd8780028a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:03:38 compute-1 podman[231617]: 2025-12-07 10:03:38.589897271 +0000 UTC m=+0.089398046 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 07 10:03:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:03:38.636 142603 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:03:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:03:38.636 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:03:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:03:38.636 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:03:38 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:38 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:03:38 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:38.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:03:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:39 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd8600016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:03:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:39 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd884000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:03:39 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:39 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:03:39 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:39.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:03:40 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:40 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd864001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:03:40 compute-1 ceph-mon[80077]: pgmap v635: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec 07 10:03:40 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:40 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:03:40 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:40.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:03:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:41 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd8780028a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:03:41 compute-1 sudo[231645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:03:41 compute-1 sudo[231645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:03:41 compute-1 sudo[231645]: pam_unix(sudo:session): session closed for user root
Dec 07 10:03:41 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:03:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:41 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd8600016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:03:41 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Dec 07 10:03:41 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/819428600' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Dec 07 10:03:41 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:41 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:03:41 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:41.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:03:42 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:42 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd8840095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:03:42 compute-1 ceph-mon[80077]: pgmap v636: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec 07 10:03:42 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/819428600' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Dec 07 10:03:42 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/3090531876' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Dec 07 10:03:42 compute-1 podman[231671]: 2025-12-07 10:03:42.578938392 +0000 UTC m=+0.081355026 container health_status 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.license=GPLv2)
Dec 07 10:03:42 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:42 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:03:42 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:42.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:03:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:43 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd864002470 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:03:43 compute-1 ceph-mon[80077]: from='client.24592 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec 07 10:03:43 compute-1 ceph-mon[80077]: from='client.24734 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec 07 10:03:43 compute-1 ceph-mon[80077]: from='client.24734 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Dec 07 10:03:43 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:03:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:43 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd8780028a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:03:43 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:43 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:03:43 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:43.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:03:44 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:44 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd860002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:03:44 compute-1 ceph-mon[80077]: pgmap v637: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Dec 07 10:03:44 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:44 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:03:44 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:44.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:03:45 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:45 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd8840095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:03:45 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:45 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd864002470 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:03:45 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:45 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:03:45 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:45.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:03:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:46 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd8780028a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:03:46 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:03:46 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:46 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:03:46 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:46.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:03:47 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:47 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd860002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:03:47 compute-1 ceph-mon[80077]: pgmap v638: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 1 op/s
Dec 07 10:03:47 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:47 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 07 10:03:47 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:47 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd8840095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:03:47 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:47 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:03:47 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:47.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:03:48 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:48 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd864002470 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:03:48 compute-1 ceph-mon[80077]: pgmap v639: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 07 10:03:48 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:48 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:03:48 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:48.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:03:49 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:49 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd864002470 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:03:49 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:49 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd860002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:03:49 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:49 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:03:49 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:49.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:03:50 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:50 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd88400a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:03:50 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:50 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 07 10:03:50 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:50 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 07 10:03:50 compute-1 ceph-mon[80077]: pgmap v640: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 07 10:03:50 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:50 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:03:50 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:50.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:03:51 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:51 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd864002470 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:03:51 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:03:51 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:51 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd864002470 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:03:51 compute-1 podman[231695]: 2025-12-07 10:03:51.602934481 +0000 UTC m=+0.092153753 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 07 10:03:51 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:51 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:03:51 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:51.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:03:52 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:52 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd860003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:03:52 compute-1 ceph-mon[80077]: pgmap v641: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec 07 10:03:52 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:52 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:03:52 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:52.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:03:53 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:53 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd88400a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:03:53 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:53 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 07 10:03:53 compute-1 sshd-session[231716]: Connection closed by 209.38.206.249 port 40086
Dec 07 10:03:53 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:53 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd8780028a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:03:53 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:53 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 07 10:03:53 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:53.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 07 10:03:54 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:54 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd8780028a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:03:54 compute-1 ceph-mon[80077]: pgmap v642: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec 07 10:03:54 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:54 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:03:54 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:54.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:03:55 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:55 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd860003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:03:55 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:55 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd88400a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:03:55 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:55 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:03:55 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:55.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:03:56 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:56 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd8780028a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:03:56 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:03:56 compute-1 ceph-mon[80077]: pgmap v643: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 07 10:03:56 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:56 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:03:56 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:56.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:03:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:57 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd864002470 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:03:57 compute-1 sshd-session[231719]: Invalid user git from 104.248.193.130 port 32936
Dec 07 10:03:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:57 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd860003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:03:57 compute-1 sshd-session[231719]: Connection closed by invalid user git 104.248.193.130 port 32936 [preauth]
Dec 07 10:03:57 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:57 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:03:57 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:57.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:03:58 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:58 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd88400a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:03:58 compute-1 ceph-mon[80077]: pgmap v644: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Dec 07 10:03:58 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:03:58 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/100358 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 07 10:03:58 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:58 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:03:58 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:03:58.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:03:59 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:59 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd88400a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:03:59 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:03:59 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd88400a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:03:59 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:03:59 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:03:59 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:03:59.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:04:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:04:00 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd858000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:04:00 compute-1 ceph-mon[80077]: pgmap v645: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 3 op/s
Dec 07 10:04:00 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/1536504346' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Dec 07 10:04:00 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/596159274' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Dec 07 10:04:00 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:04:00 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:04:00 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:00.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:04:01 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:04:01 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd860003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:04:01 compute-1 sudo[231723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:04:01 compute-1 sudo[231723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:04:01 compute-1 sudo[231723]: pam_unix(sudo:session): session closed for user root
Dec 07 10:04:01 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:04:01 compute-1 ceph-mon[80077]: from='client.24613 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec 07 10:04:01 compute-1 ceph-mon[80077]: from='client.24616 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec 07 10:04:01 compute-1 ceph-mon[80077]: from='client.24613 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Dec 07 10:04:01 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:04:01 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd88400a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:04:01 compute-1 sudo[231748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 10:04:01 compute-1 sudo[231748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:04:01 compute-1 sudo[231748]: pam_unix(sudo:session): session closed for user root
Dec 07 10:04:01 compute-1 sudo[231773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 07 10:04:01 compute-1 sudo[231773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:04:01 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:04:01 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:04:01 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:04:01.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:04:02 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:04:02 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd8780028a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:04:02 compute-1 sudo[231773]: pam_unix(sudo:session): session closed for user root
Dec 07 10:04:02 compute-1 ceph-mon[80077]: pgmap v646: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Dec 07 10:04:02 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:04:02 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 07 10:04:02 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:04:02 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:04:02 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:02.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:04:03 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:04:03 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd8580016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:04:03 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:04:03 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:04:03 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 07 10:04:03 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 07 10:04:03 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:04:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/861232648' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 07 10:04:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/861232648' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 07 10:04:03 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:04:03 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd860003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:04:03 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:04:03 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:04:03 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:04:03.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:04:04 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:04:04 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd88400a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:04:04 compute-1 ceph-mon[80077]: pgmap v647: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Dec 07 10:04:04 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:04:04 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:04:04 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:04.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:04:05 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:04:05 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd854000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:04:05 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:04:05 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd8580016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:04:05 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:04:05 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 07 10:04:05 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:04:05.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 07 10:04:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:04:06 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd88400a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:04:06 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:04:06 compute-1 ceph-mon[80077]: pgmap v648: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Dec 07 10:04:06 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:04:06 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:04:06 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:06.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:04:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:04:07 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd88400a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:04:07 compute-1 sudo[231833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 07 10:04:07 compute-1 sudo[231833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:04:07 compute-1 sudo[231833]: pam_unix(sudo:session): session closed for user root
Dec 07 10:04:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:04:07 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd8540016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:04:07 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:04:07 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:04:07 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:04:07.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:04:07 compute-1 ceph-mon[80077]: pgmap v649: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 07 10:04:07 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:04:07 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:04:08 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:04:08 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd8580016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:04:08 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:04:08 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:04:08 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:08.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:04:09 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:04:09 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd88400a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:04:09 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[231506]: 07/12/2025 10:04:09 : epoch 693550e7 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd88400a2b0 fd 39 proxy ignored for local
Dec 07 10:04:09 compute-1 kernel: ganesha.nfsd[231599]: segfault at 50 ip 00007fd92efe332e sp 00007fd8feffc210 error 4 in libntirpc.so.5.8[7fd92efc8000+2c000] likely on CPU 1 (core 0, socket 1)
Dec 07 10:04:09 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec 07 10:04:09 compute-1 podman[231859]: 2025-12-07 10:04:09.595351481 +0000 UTC m=+0.090183567 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Dec 07 10:04:09 compute-1 systemd[1]: Started Process Core Dump (PID 231886/UID 0).
Dec 07 10:04:09 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:04:09 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:04:09 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:04:09.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:04:09 compute-1 ceph-mon[80077]: pgmap v650: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 07 10:04:10 compute-1 systemd-coredump[231887]: Process 231510 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 41:
                                                    #0  0x00007fd92efe332e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Dec 07 10:04:10 compute-1 systemd[1]: systemd-coredump@12-231886-0.service: Deactivated successfully.
Dec 07 10:04:10 compute-1 systemd[1]: systemd-coredump@12-231886-0.service: Consumed 1.044s CPU time.
Dec 07 10:04:10 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:04:10 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:04:10 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:10.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:04:10 compute-1 podman[231893]: 2025-12-07 10:04:10.747752879 +0000 UTC m=+0.024647153 container died dfb156753dafc9eecb891462320ff25341e0c9199d2d3177b5708e4f2e1c9884 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec 07 10:04:10 compute-1 systemd[1]: var-lib-containers-storage-overlay-e0b1127be4fdf67157e5fe441f166f787520017706f0558ac3efcb4a62d78c19-merged.mount: Deactivated successfully.
Dec 07 10:04:10 compute-1 podman[231893]: 2025-12-07 10:04:10.791780268 +0000 UTC m=+0.068674492 container remove dfb156753dafc9eecb891462320ff25341e0c9199d2d3177b5708e4f2e1c9884 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Dec 07 10:04:10 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Main process exited, code=exited, status=139/n/a
Dec 07 10:04:11 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Failed with result 'exit-code'.
Dec 07 10:04:11 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Consumed 1.500s CPU time.
Dec 07 10:04:11 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:04:11 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:04:11 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:04:11 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:04:11.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:04:12 compute-1 ceph-mon[80077]: pgmap v651: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 07 10:04:12 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:04:12 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:04:12 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:12.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:04:13 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:04:13 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/630089787' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:04:13 compute-1 podman[231938]: 2025-12-07 10:04:13.589966956 +0000 UTC m=+0.090441484 container health_status 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Dec 07 10:04:13 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:04:13 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:04:13 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:04:13.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:04:13 compute-1 nova_compute[230488]: 2025-12-07 10:04:13.886 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:04:13 compute-1 nova_compute[230488]: 2025-12-07 10:04:13.887 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:04:13 compute-1 nova_compute[230488]: 2025-12-07 10:04:13.904 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:04:13 compute-1 nova_compute[230488]: 2025-12-07 10:04:13.904 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 07 10:04:13 compute-1 nova_compute[230488]: 2025-12-07 10:04:13.904 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 07 10:04:13 compute-1 nova_compute[230488]: 2025-12-07 10:04:13.920 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 07 10:04:13 compute-1 nova_compute[230488]: 2025-12-07 10:04:13.920 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:04:13 compute-1 nova_compute[230488]: 2025-12-07 10:04:13.921 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:04:13 compute-1 nova_compute[230488]: 2025-12-07 10:04:13.921 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:04:13 compute-1 nova_compute[230488]: 2025-12-07 10:04:13.922 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:04:13 compute-1 nova_compute[230488]: 2025-12-07 10:04:13.922 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:04:13 compute-1 nova_compute[230488]: 2025-12-07 10:04:13.922 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 07 10:04:13 compute-1 nova_compute[230488]: 2025-12-07 10:04:13.922 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:04:13 compute-1 nova_compute[230488]: 2025-12-07 10:04:13.958 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:04:13 compute-1 nova_compute[230488]: 2025-12-07 10:04:13.959 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:04:13 compute-1 nova_compute[230488]: 2025-12-07 10:04:13.960 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:04:13 compute-1 nova_compute[230488]: 2025-12-07 10:04:13.960 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 07 10:04:13 compute-1 nova_compute[230488]: 2025-12-07 10:04:13.961 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 07 10:04:14 compute-1 ceph-mon[80077]: pgmap v652: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 10:04:14 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/196889449' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:04:14 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/3436173849' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:04:14 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 07 10:04:14 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2611298998' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:04:14 compute-1 nova_compute[230488]: 2025-12-07 10:04:14.476 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 07 10:04:14 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:04:14 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:04:14 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:14.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:04:14 compute-1 nova_compute[230488]: 2025-12-07 10:04:14.724 230492 WARNING nova.virt.libvirt.driver [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 07 10:04:14 compute-1 nova_compute[230488]: 2025-12-07 10:04:14.725 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5271MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 07 10:04:14 compute-1 nova_compute[230488]: 2025-12-07 10:04:14.725 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:04:14 compute-1 nova_compute[230488]: 2025-12-07 10:04:14.725 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:04:14 compute-1 nova_compute[230488]: 2025-12-07 10:04:14.792 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 07 10:04:14 compute-1 nova_compute[230488]: 2025-12-07 10:04:14.792 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 07 10:04:14 compute-1 nova_compute[230488]: 2025-12-07 10:04:14.805 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 07 10:04:15 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/2674510480' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:04:15 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/2611298998' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:04:15 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 07 10:04:15 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/907430375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:04:15 compute-1 nova_compute[230488]: 2025-12-07 10:04:15.294 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 07 10:04:15 compute-1 nova_compute[230488]: 2025-12-07 10:04:15.301 230492 DEBUG nova.compute.provider_tree [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Inventory has not changed in ProviderTree for provider: 58b51610-0751-43d9-94a3-66540bffec81 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 07 10:04:15 compute-1 nova_compute[230488]: 2025-12-07 10:04:15.326 230492 DEBUG nova.scheduler.client.report [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Inventory has not changed for provider 58b51610-0751-43d9-94a3-66540bffec81 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 07 10:04:15 compute-1 nova_compute[230488]: 2025-12-07 10:04:15.328 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 07 10:04:15 compute-1 nova_compute[230488]: 2025-12-07 10:04:15.329 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:04:15 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/100415 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 07 10:04:15 compute-1 nova_compute[230488]: 2025-12-07 10:04:15.676 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:04:15 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:04:15 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:04:15 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:04:15.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:04:16 compute-1 ceph-mon[80077]: pgmap v653: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 07 10:04:16 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/907430375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:04:16 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:04:16 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:04:16 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:04:16 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:16.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:04:17 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:04:17 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:04:17 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:04:17.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:04:18 compute-1 ceph-mon[80077]: pgmap v654: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 07 10:04:18 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:04:18 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:04:18 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:18.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:04:19 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:04:19 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:04:19 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:04:19.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:04:20 compute-1 ceph-mon[80077]: pgmap v655: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 07 10:04:20 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:04:20 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:04:20 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:20.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:04:21 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Scheduled restart job, restart counter is at 13.
Dec 07 10:04:21 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.jddrlu for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec 07 10:04:21 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Consumed 1.500s CPU time.
Dec 07 10:04:21 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.jddrlu for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c...
Dec 07 10:04:21 compute-1 sudo[232012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:04:21 compute-1 sudo[232012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:04:21 compute-1 sudo[232012]: pam_unix(sudo:session): session closed for user root
Dec 07 10:04:21 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:04:21 compute-1 podman[232078]: 2025-12-07 10:04:21.556103435 +0000 UTC m=+0.046461897 container create 5558e026e61dff85b27fb203b37d522cd936a1ed9d5d8502391fa347c3059d00 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 07 10:04:21 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ccca33ed3f161e438298024a71576cbef28dcc7f173e6d8be92b72bdcec8592/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec 07 10:04:21 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ccca33ed3f161e438298024a71576cbef28dcc7f173e6d8be92b72bdcec8592/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 07 10:04:21 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ccca33ed3f161e438298024a71576cbef28dcc7f173e6d8be92b72bdcec8592/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec 07 10:04:21 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ccca33ed3f161e438298024a71576cbef28dcc7f173e6d8be92b72bdcec8592/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.jddrlu-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec 07 10:04:21 compute-1 podman[232078]: 2025-12-07 10:04:21.537405676 +0000 UTC m=+0.027764158 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 07 10:04:21 compute-1 podman[232078]: 2025-12-07 10:04:21.645606982 +0000 UTC m=+0.135965484 container init 5558e026e61dff85b27fb203b37d522cd936a1ed9d5d8502391fa347c3059d00 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec 07 10:04:21 compute-1 podman[232078]: 2025-12-07 10:04:21.660428255 +0000 UTC m=+0.150786727 container start 5558e026e61dff85b27fb203b37d522cd936a1ed9d5d8502391fa347c3059d00 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 07 10:04:21 compute-1 bash[232078]: 5558e026e61dff85b27fb203b37d522cd936a1ed9d5d8502391fa347c3059d00
Dec 07 10:04:21 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.jddrlu for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec 07 10:04:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:21 : epoch 69355125 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec 07 10:04:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:21 : epoch 69355125 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec 07 10:04:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:21 : epoch 69355125 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec 07 10:04:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:21 : epoch 69355125 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec 07 10:04:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:21 : epoch 69355125 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec 07 10:04:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:21 : epoch 69355125 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec 07 10:04:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:21 : epoch 69355125 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec 07 10:04:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:21 : epoch 69355125 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 07 10:04:21 compute-1 podman[232098]: 2025-12-07 10:04:21.791391451 +0000 UTC m=+0.077501791 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Dec 07 10:04:21 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:04:21 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:04:21 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:04:21.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:04:22 compute-1 ceph-mon[80077]: pgmap v656: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 10:04:22 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:04:22 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:04:22 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:22.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:04:23 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:04:23 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:04:23 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:04:23.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:04:24 compute-1 ceph-mon[80077]: pgmap v657: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 07 10:04:24 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:04:24 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:04:24 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:24.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:04:25 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:04:25 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:04:25 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:04:25.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:04:26 compute-1 ceph-mon[80077]: pgmap v658: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 07 10:04:26 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:04:26 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:04:26 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:04:26 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:26.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:04:27 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:27 : epoch 69355125 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 07 10:04:27 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:27 : epoch 69355125 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 07 10:04:27 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:04:27 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:04:27 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:04:27.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:04:28 compute-1 ceph-mon[80077]: pgmap v659: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Dec 07 10:04:28 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:04:28 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:04:28 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:04:28 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:28.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:04:29 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:04:29 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:04:29 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:04:29.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:04:30 compute-1 ceph-mon[80077]: pgmap v660: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 255 B/s wr, 0 op/s
Dec 07 10:04:30 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:04:30 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:04:30 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:30.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:04:31 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:04:31 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:04:31 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:04:31 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:04:31.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:04:32 compute-1 ceph-mon[80077]: pgmap v661: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Dec 07 10:04:32 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:04:32 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:04:32 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:32.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:04:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:33 : epoch 69355125 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 07 10:04:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:33 : epoch 69355125 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec 07 10:04:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:33 : epoch 69355125 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec 07 10:04:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:33 : epoch 69355125 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec 07 10:04:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:33 : epoch 69355125 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec 07 10:04:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:33 : epoch 69355125 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec 07 10:04:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:33 : epoch 69355125 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec 07 10:04:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:33 : epoch 69355125 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 10:04:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:33 : epoch 69355125 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 10:04:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:33 : epoch 69355125 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 10:04:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:33 : epoch 69355125 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec 07 10:04:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:33 : epoch 69355125 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 10:04:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:33 : epoch 69355125 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec 07 10:04:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:33 : epoch 69355125 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec 07 10:04:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:33 : epoch 69355125 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec 07 10:04:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:33 : epoch 69355125 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec 07 10:04:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:33 : epoch 69355125 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec 07 10:04:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:33 : epoch 69355125 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec 07 10:04:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:33 : epoch 69355125 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec 07 10:04:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:33 : epoch 69355125 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec 07 10:04:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:33 : epoch 69355125 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec 07 10:04:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:33 : epoch 69355125 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec 07 10:04:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:33 : epoch 69355125 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec 07 10:04:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:33 : epoch 69355125 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec 07 10:04:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:33 : epoch 69355125 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 07 10:04:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:33 : epoch 69355125 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec 07 10:04:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:33 : epoch 69355125 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 07 10:04:33 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:04:33 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:04:33 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:04:33.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:04:34 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:34 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6bb0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:04:34 compute-1 ceph-mon[80077]: pgmap v662: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Dec 07 10:04:34 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:04:34 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:04:34 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:34.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:04:35 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:35 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba40014d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:04:35 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:35 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:04:35 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:04:35 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:04:35 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:04:35.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:04:36 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:36 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b94000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:04:36 compute-1 ceph-mon[80077]: pgmap v663: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 07 10:04:36 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:04:36 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:04:36 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:04:36 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:36.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:04:37 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:37 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6bb0001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:04:37 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/100437 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 07 10:04:37 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:37 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba40021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:04:37 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:04:37 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:04:37 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:04:37.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:04:38 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:38 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba40021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:04:38 compute-1 ceph-mon[80077]: pgmap v664: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 3 op/s
Dec 07 10:04:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:04:38.636 142603 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:04:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:04:38.638 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:04:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:04:38.638 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:04:38 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:04:38 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:04:38 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:38.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:04:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:39 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba40021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:04:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:39 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:04:39 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:04:39 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:04:39 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:04:39.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:04:40 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:40 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b880016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:04:40 compute-1 ceph-mon[80077]: pgmap v665: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 3 op/s
Dec 07 10:04:40 compute-1 podman[232180]: 2025-12-07 10:04:40.645283853 +0000 UTC m=+0.135479790 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 07 10:04:40 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:04:40 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:04:40 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:40.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:04:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:41 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba40021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:04:41 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:04:41 compute-1 sudo[232206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:04:41 compute-1 sudo[232206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:04:41 compute-1 sudo[232206]: pam_unix(sudo:session): session closed for user root
Dec 07 10:04:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:41 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba40021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:04:41 compute-1 sshd-session[232214]: Invalid user deploy from 104.248.193.130 port 42004
Dec 07 10:04:41 compute-1 sshd-session[232214]: Connection closed by invalid user deploy 104.248.193.130 port 42004 [preauth]
Dec 07 10:04:41 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:04:41 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:04:41 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:04:41.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:04:42 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:42 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b800016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:04:42 compute-1 ceph-mon[80077]: pgmap v666: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 767 B/s wr, 2 op/s
Dec 07 10:04:42 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:04:42 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:04:42 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:42.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:04:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:43 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b880016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:04:43 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:04:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:43 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6bb00089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:04:43 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:04:43 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:04:43 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:04:43.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:04:44 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:44 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba40021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:04:44 compute-1 ceph-mon[80077]: pgmap v667: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Dec 07 10:04:44 compute-1 podman[232235]: 2025-12-07 10:04:44.582973216 +0000 UTC m=+0.078925090 container health_status 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 07 10:04:44 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:04:44 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:04:44 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:44.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:04:45 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:45 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b800016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:04:45 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:45 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b880016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:04:46 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:04:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:46 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6bb00089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:04:46 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:04:46 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:04:46.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:04:46 compute-1 ceph-mon[80077]: pgmap v668: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 07 10:04:46 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:04:46 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:04:46 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:04:46 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:46.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:04:47 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:47 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba40021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:04:47 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:47 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b800016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:04:48 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:48 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:04:48 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:04:48 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:04:48 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:04:48.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:04:48 compute-1 ceph-mon[80077]: pgmap v669: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 10:04:48 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:04:48 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:04:48 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:48.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:04:49 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:49 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6bb00089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:04:49 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:49 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6bb00089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:04:50 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:50 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:04:50 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:04:50 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:04:50 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:04:50.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:04:50 compute-1 ceph-mon[80077]: pgmap v670: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 10:04:50 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:04:50 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:04:50 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:50.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:04:51 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:51 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:04:51 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:04:51 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:51 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6bb00089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:04:52 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:52 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba40021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:04:52 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:04:52 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:04:52 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:04:52.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:04:52 compute-1 ceph-mon[80077]: pgmap v671: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 10:04:52 compute-1 podman[232260]: 2025-12-07 10:04:52.604819575 +0000 UTC m=+0.088418998 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125)
Dec 07 10:04:52 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:04:52 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:04:52 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:52.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:04:52 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:04:52.900 142603 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '76:bc:71', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ce:90:a9:76:77:00'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 07 10:04:52 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:04:52.901 142603 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 07 10:04:52 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:04:52.902 142603 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e231b22a-cdf9-44dd-ad96-a8e48b3d52da, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 07 10:04:53 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:53 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:04:53 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:53 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:04:54 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:54 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6bb000a340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:04:54 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:04:54 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:04:54 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:04:54.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:04:54 compute-1 ceph-mon[80077]: pgmap v672: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 10:04:54 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:04:54 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:04:54 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:54.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:04:55 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:55 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba40021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:04:55 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:55 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba40021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:04:56 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:56 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:04:56 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:04:56 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:04:56 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:04:56.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:04:56 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:04:56 compute-1 ceph-mon[80077]: pgmap v673: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 07 10:04:56 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:04:56 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:04:56 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:56.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:04:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:57 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6bb000a340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:04:57 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:04:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:57 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:04:58 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:58 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba40021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:04:58 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:04:58 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:04:58 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:04:58.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:04:58 compute-1 ceph-mon[80077]: pgmap v674: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 10:04:58 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:04:58 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:04:58 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:04:58.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:04:59 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:59 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:04:59 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:04:59 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6bb000a340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:00 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:00 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:00 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:05:00 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:00.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:05:00 compute-1 ceph-mon[80077]: pgmap v675: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 10:05:00 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:00 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:05:00 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:00.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:05:01 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:01 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba40021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:01 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:05:01 compute-1 sudo[232283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:05:01 compute-1 sudo[232283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:05:01 compute-1 sudo[232283]: pam_unix(sudo:session): session closed for user root
Dec 07 10:05:01 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:01 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba40021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:02 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:02 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6bb000a340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:02 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:02 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:05:02 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:02.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:05:02 compute-1 ceph-mon[80077]: pgmap v676: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 10:05:02 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:02 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:05:02 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:02.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:05:03 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:03 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/191301356' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 07 10:05:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/191301356' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 07 10:05:03 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:03 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:04 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:04 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba40021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:04 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:04 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:05:04 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:04.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:05:04 compute-1 ceph-mon[80077]: pgmap v677: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 10:05:04 compute-1 sshd-session[232310]: Connection closed by 101.36.224.146 port 44946
Dec 07 10:05:04 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:04 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:05:04 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:04.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:05:05 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:05 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6bb000a340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:05 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:05 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:06 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:06 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:06 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:05:06 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:06.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:05:06 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:05:06 compute-1 ceph-mon[80077]: pgmap v678: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 07 10:05:06 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:06 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:05:06 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:06.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:05:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:07 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6bac0013a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:07 compute-1 sudo[232313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 10:05:07 compute-1 sudo[232313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:05:07 compute-1 sudo[232313]: pam_unix(sudo:session): session closed for user root
Dec 07 10:05:07 compute-1 sudo[232338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 07 10:05:07 compute-1 sudo[232338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:05:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:07 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6bb000a340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:08 compute-1 podman[232435]: 2025-12-07 10:05:08.230597334 +0000 UTC m=+0.104772193 container exec 0adb3b962f9be9f77484cea48a61ddb6dbdac27d2f6f2d11917c2759647cf1b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-crash-compute-1, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec 07 10:05:08 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:08 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6bb000a340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:08 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:08 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:05:08 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:08.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:05:08 compute-1 podman[232435]: 2025-12-07 10:05:08.354053176 +0000 UTC m=+0.228228005 container exec_died 0adb3b962f9be9f77484cea48a61ddb6dbdac27d2f6f2d11917c2759647cf1b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-crash-compute-1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec 07 10:05:08 compute-1 ceph-mon[80077]: pgmap v679: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 10:05:08 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 07 10:05:08 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:08 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:05:08 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:08.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:05:08 compute-1 podman[232559]: 2025-12-07 10:05:08.973270816 +0000 UTC m=+0.069235486 container exec 0a4d2954764c22b86121bac3d11bc9b63e47782ce61bae7b4370d135249c4a00 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 07 10:05:08 compute-1 podman[232559]: 2025-12-07 10:05:08.9829737 +0000 UTC m=+0.078938320 container exec_died 0a4d2954764c22b86121bac3d11bc9b63e47782ce61bae7b4370d135249c4a00 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 07 10:05:09 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:09 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:09 compute-1 podman[232650]: 2025-12-07 10:05:09.448403693 +0000 UTC m=+0.080510844 container exec 5558e026e61dff85b27fb203b37d522cd936a1ed9d5d8502391fa347c3059d00 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 07 10:05:09 compute-1 podman[232650]: 2025-12-07 10:05:09.462141007 +0000 UTC m=+0.094248118 container exec_died 5558e026e61dff85b27fb203b37d522cd936a1ed9d5d8502391fa347c3059d00 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec 07 10:05:09 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:09 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b94001140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:09 compute-1 podman[232715]: 2025-12-07 10:05:09.709013719 +0000 UTC m=+0.062127623 container exec beb6c210855d5dee9a79b7f50cd4854bb9f0b0f00930d5f252196d17b26aec7f (image=quay.io/ceph/haproxy:2.3, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua)
Dec 07 10:05:09 compute-1 podman[232715]: 2025-12-07 10:05:09.748019311 +0000 UTC m=+0.101133175 container exec_died beb6c210855d5dee9a79b7f50cd4854bb9f0b0f00930d5f252196d17b26aec7f (image=quay.io/ceph/haproxy:2.3, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua)
Dec 07 10:05:10 compute-1 podman[232784]: 2025-12-07 10:05:10.041265786 +0000 UTC m=+0.077835431 container exec 63d2276d352335c300047c0a6c5b69b30b1c28b930e12681d437c0d1d4a9599f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-nfs-cephfs-compute-1-gawwbe, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.buildah.version=1.28.2, release=1793, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, vcs-type=git, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Dec 07 10:05:10 compute-1 podman[232784]: 2025-12-07 10:05:10.074808369 +0000 UTC m=+0.111377964 container exec_died 63d2276d352335c300047c0a6c5b69b30b1c28b930e12681d437c0d1d4a9599f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-nfs-cephfs-compute-1-gawwbe, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, name=keepalived, version=2.2.4, io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, release=1793, vcs-type=git, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 07 10:05:10 compute-1 sudo[232338]: pam_unix(sudo:session): session closed for user root
Dec 07 10:05:10 compute-1 sudo[232819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 10:05:10 compute-1 sudo[232819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:05:10 compute-1 sudo[232819]: pam_unix(sudo:session): session closed for user root
Dec 07 10:05:10 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:10 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6bac001eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:10 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:10 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:05:10 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:10.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:05:10 compute-1 sudo[232844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 07 10:05:10 compute-1 sudo[232844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:05:10 compute-1 ceph-mon[80077]: pgmap v680: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 10:05:10 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:05:10 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:05:10 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:05:10 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:05:10 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:10 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:05:10 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:10.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:05:11 compute-1 sudo[232844]: pam_unix(sudo:session): session closed for user root
Dec 07 10:05:11 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:11 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:11 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:05:11 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 07 10:05:11 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 07 10:05:11 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:05:11 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 07 10:05:11 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:05:11 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:05:11 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 07 10:05:11 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 07 10:05:11 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:05:11 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:11 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:11 compute-1 podman[232899]: 2025-12-07 10:05:11.652828675 +0000 UTC m=+0.140621900 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 07 10:05:12 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:12 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b94001140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:12 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:12 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:05:12 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:12.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:05:12 compute-1 ceph-mon[80077]: pgmap v681: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 10:05:12 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:05:12 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:12 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:05:12 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:12.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:05:13 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:13 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6bac001eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:13 compute-1 nova_compute[230488]: 2025-12-07 10:05:13.270 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:05:13 compute-1 nova_compute[230488]: 2025-12-07 10:05:13.270 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:05:13 compute-1 nova_compute[230488]: 2025-12-07 10:05:13.271 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:05:13 compute-1 nova_compute[230488]: 2025-12-07 10:05:13.271 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 07 10:05:13 compute-1 nova_compute[230488]: 2025-12-07 10:05:13.272 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:05:13 compute-1 nova_compute[230488]: 2025-12-07 10:05:13.296 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:05:13 compute-1 nova_compute[230488]: 2025-12-07 10:05:13.297 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:05:13 compute-1 nova_compute[230488]: 2025-12-07 10:05:13.297 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:05:13 compute-1 nova_compute[230488]: 2025-12-07 10:05:13.298 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 07 10:05:13 compute-1 nova_compute[230488]: 2025-12-07 10:05:13.298 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 07 10:05:13 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:13 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:13 compute-1 ceph-mon[80077]: pgmap v682: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 10:05:13 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 07 10:05:13 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3486218841' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:05:13 compute-1 nova_compute[230488]: 2025-12-07 10:05:13.785 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 07 10:05:14 compute-1 nova_compute[230488]: 2025-12-07 10:05:14.061 230492 WARNING nova.virt.libvirt.driver [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 07 10:05:14 compute-1 nova_compute[230488]: 2025-12-07 10:05:14.063 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5189MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 07 10:05:14 compute-1 nova_compute[230488]: 2025-12-07 10:05:14.063 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:05:14 compute-1 nova_compute[230488]: 2025-12-07 10:05:14.064 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:05:14 compute-1 nova_compute[230488]: 2025-12-07 10:05:14.160 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 07 10:05:14 compute-1 nova_compute[230488]: 2025-12-07 10:05:14.160 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 07 10:05:14 compute-1 nova_compute[230488]: 2025-12-07 10:05:14.181 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 07 10:05:14 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:14 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:14 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:14 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:05:14 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:14.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:05:14 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/3486218841' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:05:14 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2536224671' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:05:14 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1265336556' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:05:14 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 07 10:05:14 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3345443778' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:05:14 compute-1 nova_compute[230488]: 2025-12-07 10:05:14.745 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 07 10:05:14 compute-1 nova_compute[230488]: 2025-12-07 10:05:14.755 230492 DEBUG nova.compute.provider_tree [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Inventory has not changed in ProviderTree for provider: 58b51610-0751-43d9-94a3-66540bffec81 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 07 10:05:14 compute-1 nova_compute[230488]: 2025-12-07 10:05:14.771 230492 DEBUG nova.scheduler.client.report [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Inventory has not changed for provider 58b51610-0751-43d9-94a3-66540bffec81 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 07 10:05:14 compute-1 nova_compute[230488]: 2025-12-07 10:05:14.773 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 07 10:05:14 compute-1 nova_compute[230488]: 2025-12-07 10:05:14.774 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.710s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:05:14 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:14 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:05:14 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:14.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:05:15 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:15 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b94002030 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:15 compute-1 podman[232973]: 2025-12-07 10:05:15.612996072 +0000 UTC m=+0.102024319 container health_status 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 07 10:05:15 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:15 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:15 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/3345443778' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:05:15 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/3404941440' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:05:15 compute-1 ceph-mon[80077]: pgmap v683: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 07 10:05:15 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/4006115281' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:05:15 compute-1 nova_compute[230488]: 2025-12-07 10:05:15.769 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:05:15 compute-1 nova_compute[230488]: 2025-12-07 10:05:15.770 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:05:15 compute-1 nova_compute[230488]: 2025-12-07 10:05:15.771 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 07 10:05:15 compute-1 nova_compute[230488]: 2025-12-07 10:05:15.771 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 07 10:05:15 compute-1 nova_compute[230488]: 2025-12-07 10:05:15.801 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 07 10:05:15 compute-1 nova_compute[230488]: 2025-12-07 10:05:15.801 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:05:15 compute-1 nova_compute[230488]: 2025-12-07 10:05:15.802 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:05:15 compute-1 nova_compute[230488]: 2025-12-07 10:05:15.803 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:05:16 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:16 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:16 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:16 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:05:16 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:16.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:05:16 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:05:16 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:16 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:05:16 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:16.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:05:17 compute-1 sudo[232994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 07 10:05:17 compute-1 sudo[232994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:05:17 compute-1 sudo[232994]: pam_unix(sudo:session): session closed for user root
Dec 07 10:05:17 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:17 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:17 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:17 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b94002030 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:17 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:05:17 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:05:17 compute-1 ceph-mon[80077]: pgmap v684: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 10:05:18 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:18 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:18 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:18 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:05:18 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:18.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:05:18 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:18 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:05:18 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:18.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:05:19 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:19 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:19 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:19 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:19 compute-1 ceph-mon[80077]: pgmap v685: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 10:05:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:20 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b94002d40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:20 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:20 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:05:20 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:20.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:05:20 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:20 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:05:20 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:20.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:05:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:21 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b94002d40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:21 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:05:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:21 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:21 compute-1 sudo[233021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:05:21 compute-1 sudo[233021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:05:21 compute-1 sudo[233021]: pam_unix(sudo:session): session closed for user root
Dec 07 10:05:21 compute-1 ceph-mon[80077]: pgmap v686: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 10:05:22 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:22 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:22 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:22 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:05:22 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:22.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:05:22 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:22 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:05:22 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:22.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:05:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:23 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b94002d40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:23 compute-1 podman[233047]: 2025-12-07 10:05:23.612941513 +0000 UTC m=+0.103600282 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Dec 07 10:05:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:23 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:24 compute-1 ceph-mon[80077]: pgmap v687: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 10:05:24 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:24 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:24 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:24 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:05:24 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:24.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:05:24 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:24 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:05:24 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:24.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:05:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:25 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:25 compute-1 sshd-session[233069]: Invalid user test from 104.248.193.130 port 49582
Dec 07 10:05:25 compute-1 sshd-session[233069]: Connection closed by invalid user test 104.248.193.130 port 49582 [preauth]
Dec 07 10:05:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:25 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b94002d40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:26 compute-1 ceph-mon[80077]: pgmap v688: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 07 10:05:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:26 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b94002d40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:26 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:26 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:05:26 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:26.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:05:26 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:05:26 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:26 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:05:26 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:26.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:05:27 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:27 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:27 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:27 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6bac001eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:28 compute-1 ceph-mon[80077]: pgmap v689: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 10:05:28 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:05:28 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:28 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c002160 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:28 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:28 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:05:28 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:28.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:05:28 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:28 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:05:28 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:28.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:05:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:29 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b94003e40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:29 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:30 compute-1 ceph-mon[80077]: pgmap v690: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 10:05:30 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:30 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6bac003340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:30 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:30 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:05:30 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:30.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:05:30 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:30 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:05:30 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:30.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:05:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:31 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:31 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:05:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:31 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b94003e40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:32 compute-1 ceph-mon[80077]: pgmap v691: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 10:05:32 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:32 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:32 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:32 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:05:32 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:32.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:05:32 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:32 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:05:32 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:32.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:05:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:33 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6bac003340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:33 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:34 compute-1 ceph-mon[80077]: pgmap v692: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 10:05:34 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:34 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b94003e40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:34 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:34 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:05:34 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:34.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:05:34 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:34 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:05:34 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:34.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:05:35 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:35 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:35 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:35 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6bac004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:36 compute-1 ceph-mon[80077]: pgmap v693: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 07 10:05:36 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:36 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:36 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:36 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:05:36 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:36.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:05:36 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:05:36 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:36 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:05:36 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:36.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:05:37 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:37 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b94003e40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:37 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:37 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:38 compute-1 ceph-mon[80077]: pgmap v694: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 10:05:38 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:38 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6bac004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:38 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:38 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:05:38 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:38.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:05:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:05:38.637 142603 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:05:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:05:38.637 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:05:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:05:38.638 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:05:38 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:38 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:05:38 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:38.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:05:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:39 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:39 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b94003e40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:40 compute-1 ceph-mon[80077]: pgmap v695: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 10:05:40 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:40 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:40 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:40 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:05:40 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:40.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:05:40 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:40 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:05:40 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:40.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:05:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:41 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6bac004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:41 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:05:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:41 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:41 compute-1 sudo[233079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:05:41 compute-1 sudo[233079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:05:41 compute-1 sudo[233079]: pam_unix(sudo:session): session closed for user root
Dec 07 10:05:41 compute-1 podman[233103]: 2025-12-07 10:05:41.944924043 +0000 UTC m=+0.106551482 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 07 10:05:42 compute-1 ceph-mon[80077]: pgmap v696: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 10:05:42 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:42 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b94003e40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:42 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:42 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:05:42 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:42.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:05:42 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:42 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:05:42 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:42.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:05:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:43 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:43 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:05:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:43 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6bac004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:44 compute-1 ceph-mon[80077]: pgmap v697: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 10:05:44 compute-1 ceph-mon[80077]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #40. Immutable memtables: 0.
Dec 07 10:05:44 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:05:44.205128) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 07 10:05:44 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 40
Dec 07 10:05:44 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101944205163, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 2379, "num_deletes": 251, "total_data_size": 6190316, "memory_usage": 6265216, "flush_reason": "Manual Compaction"}
Dec 07 10:05:44 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #41: started
Dec 07 10:05:44 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101944236408, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 41, "file_size": 4047840, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21057, "largest_seqno": 23431, "table_properties": {"data_size": 4038214, "index_size": 6056, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 19723, "raw_average_key_size": 20, "raw_value_size": 4019136, "raw_average_value_size": 4126, "num_data_blocks": 265, "num_entries": 974, "num_filter_entries": 974, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765101730, "oldest_key_time": 1765101730, "file_creation_time": 1765101944, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "19b7cd17-892b-4642-8771-311739802c4a", "db_session_id": "JE48258Z8V09XG5T5TD7", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Dec 07 10:05:44 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 31356 microseconds, and 9788 cpu microseconds.
Dec 07 10:05:44 compute-1 ceph-mon[80077]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 07 10:05:44 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:05:44.236477) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #41: 4047840 bytes OK
Dec 07 10:05:44 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:05:44.236502) [db/memtable_list.cc:519] [default] Level-0 commit table #41 started
Dec 07 10:05:44 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:05:44.238715) [db/memtable_list.cc:722] [default] Level-0 commit table #41: memtable #1 done
Dec 07 10:05:44 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:05:44.238741) EVENT_LOG_v1 {"time_micros": 1765101944238734, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 07 10:05:44 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:05:44.238766) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 07 10:05:44 compute-1 ceph-mon[80077]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 6179940, prev total WAL file size 6179940, number of live WAL files 2.
Dec 07 10:05:44 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000037.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 10:05:44 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:05:44.241193) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Dec 07 10:05:44 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 07 10:05:44 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [41(3952KB)], [39(12MB)]
Dec 07 10:05:44 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101944241271, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [41], "files_L6": [39], "score": -1, "input_data_size": 17630804, "oldest_snapshot_seqno": -1}
Dec 07 10:05:44 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:44 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:44 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:44 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:05:44 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:44.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:05:44 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #42: 5560 keys, 15441441 bytes, temperature: kUnknown
Dec 07 10:05:44 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101944390097, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 42, "file_size": 15441441, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15401615, "index_size": 24804, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13957, "raw_key_size": 140199, "raw_average_key_size": 25, "raw_value_size": 15298460, "raw_average_value_size": 2751, "num_data_blocks": 1025, "num_entries": 5560, "num_filter_entries": 5560, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100473, "oldest_key_time": 0, "file_creation_time": 1765101944, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "19b7cd17-892b-4642-8771-311739802c4a", "db_session_id": "JE48258Z8V09XG5T5TD7", "orig_file_number": 42, "seqno_to_time_mapping": "N/A"}}
Dec 07 10:05:44 compute-1 ceph-mon[80077]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 07 10:05:44 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:05:44.390343) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 15441441 bytes
Dec 07 10:05:44 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:05:44.391935) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 118.4 rd, 103.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.9, 13.0 +0.0 blob) out(14.7 +0.0 blob), read-write-amplify(8.2) write-amplify(3.8) OK, records in: 6080, records dropped: 520 output_compression: NoCompression
Dec 07 10:05:44 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:05:44.391956) EVENT_LOG_v1 {"time_micros": 1765101944391946, "job": 22, "event": "compaction_finished", "compaction_time_micros": 148883, "compaction_time_cpu_micros": 37214, "output_level": 6, "num_output_files": 1, "total_output_size": 15441441, "num_input_records": 6080, "num_output_records": 5560, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 07 10:05:44 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 10:05:44 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101944392929, "job": 22, "event": "table_file_deletion", "file_number": 41}
Dec 07 10:05:44 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000039.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 10:05:44 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765101944395950, "job": 22, "event": "table_file_deletion", "file_number": 39}
Dec 07 10:05:44 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:05:44.241111) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:05:44 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:05:44.396064) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:05:44 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:05:44.396070) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:05:44 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:05:44.396073) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:05:44 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:05:44.396075) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:05:44 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:05:44.396076) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:05:44 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:44 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:05:44 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:44.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:05:45 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:45 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b94003e40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:45 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:45 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:46 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6bb00020b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:46 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:46 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:05:46 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:46.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:05:46 compute-1 ceph-mon[80077]: pgmap v698: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 07 10:05:46 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:05:46 compute-1 podman[233135]: 2025-12-07 10:05:46.582607517 +0000 UTC m=+0.061615848 container health_status 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 07 10:05:46 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:46 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:05:46 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:46.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:05:47 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:47 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6bac004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:47 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:47 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b94003e40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:48 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:48 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:48 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:48 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:05:48 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:48.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:05:48 compute-1 ceph-mon[80077]: pgmap v699: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 10:05:48 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:48 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:05:48 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:48.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:05:49 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:49 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6bb00020b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:49 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:49 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6bac004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:50 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:50 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b94003e40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:50 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:50 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:05:50 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:50.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:05:50 compute-1 ceph-mon[80077]: pgmap v700: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 10:05:50 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:50 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:05:50 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:50.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:05:51 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:51 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:51 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:05:51 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:51 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6bb0009100 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:52 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:52 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6bac004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:52 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:52 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:05:52 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:52.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:05:52 compute-1 ceph-mon[80077]: pgmap v701: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 10:05:52 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:52 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:05:52 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:52.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:05:53 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:53 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b94003e40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:53 compute-1 ceph-mon[80077]: pgmap v702: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 10:05:53 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:53 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:54 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:54 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6bb0009100 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:54 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:54 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:05:54 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:54.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:05:54 compute-1 podman[233159]: 2025-12-07 10:05:54.576060391 +0000 UTC m=+0.075677091 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 07 10:05:54 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:54 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:05:54 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:54.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:05:55 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:55 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6bac004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:55 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:55 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b94003e40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:56 compute-1 ceph-mon[80077]: pgmap v703: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 07 10:05:56 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:56 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b94003e40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:56 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:56 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:05:56 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:56.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:05:56 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:05:56 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:56 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:05:56 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:56.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:05:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:57 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6bb0009100 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:57 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:05:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:57 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6bac004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:58 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:58 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:58 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:58 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:05:58 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:05:58.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:05:58 compute-1 ceph-mon[80077]: pgmap v704: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 10:05:58 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:05:58 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:05:58 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:05:58.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:05:59 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:59 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b94003e40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:59 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:05:59 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b94003e40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:05:59 compute-1 ceph-mon[80077]: pgmap v705: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 10:06:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:00 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6bac004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:00 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:00 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:06:00 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:00.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:06:00 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:00 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:06:00 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:00.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:06:01 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:01 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:01 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:06:01 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:01 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6bb0009100 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:01 compute-1 sudo[233182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:06:01 compute-1 sudo[233182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:06:01 compute-1 sudo[233182]: pam_unix(sudo:session): session closed for user root
Dec 07 10:06:02 compute-1 ceph-mon[80077]: pgmap v706: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 10:06:02 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:02 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b94003e40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:02 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:02 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:06:02 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:02.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:06:02 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:02 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:06:02 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:02.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:06:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/3338351440' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 07 10:06:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/3338351440' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 07 10:06:03 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:03 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6bac004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:03 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:03 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:04 compute-1 ceph-mon[80077]: pgmap v707: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 10:06:04 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:04 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6bb0009100 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:04 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:04 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:06:04 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:04.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:06:04 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:04 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:06:04 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:04.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:06:05 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:05 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b94003e40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:05 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:05 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6bac004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:06 compute-1 ceph-mon[80077]: pgmap v708: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 07 10:06:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:06 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:06 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:06 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:06:06 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:06.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:06:06 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:06:06 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:06 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:06:06 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:06.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:06:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:07 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6bb0009100 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:07 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b94003e40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:08 compute-1 ceph-mon[80077]: pgmap v709: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 10:06:08 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:08 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6bac004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:08 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:08 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:06:08 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:08.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:06:08 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:08 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:06:08 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:08.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:06:09 compute-1 sshd-session[233211]: Invalid user nagios from 104.248.193.130 port 43564
Dec 07 10:06:09 compute-1 sshd-session[233211]: Connection closed by invalid user nagios 104.248.193.130 port 43564 [preauth]
Dec 07 10:06:09 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:09 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:09 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:09 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6bb0009100 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:10 compute-1 ceph-mon[80077]: pgmap v710: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 10:06:10 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:10 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:10 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:10 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:06:10 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:10.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:06:10 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:10 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:06:10 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:10.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:06:11 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:11 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b94003e40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:11 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:06:11 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:11 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6bac004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:12 compute-1 ceph-mon[80077]: pgmap v711: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 10:06:12 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:12 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6bb0009100 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:12 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:12 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:06:12 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:12.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:06:12 compute-1 podman[233215]: 2025-12-07 10:06:12.662910689 +0000 UTC m=+0.151015152 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible)
Dec 07 10:06:12 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:12 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:06:12 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:12.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:06:13 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:06:13 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:13 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80003b70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:13 compute-1 nova_compute[230488]: 2025-12-07 10:06:13.270 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:06:13 compute-1 nova_compute[230488]: 2025-12-07 10:06:13.270 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 07 10:06:13 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:13 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b94003fe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:14 compute-1 ceph-mon[80077]: pgmap v712: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 10:06:14 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/3603045892' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:06:14 compute-1 nova_compute[230488]: 2025-12-07 10:06:14.265 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:06:14 compute-1 nova_compute[230488]: 2025-12-07 10:06:14.289 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:06:14 compute-1 nova_compute[230488]: 2025-12-07 10:06:14.289 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 07 10:06:14 compute-1 nova_compute[230488]: 2025-12-07 10:06:14.289 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 07 10:06:14 compute-1 nova_compute[230488]: 2025-12-07 10:06:14.316 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 07 10:06:14 compute-1 nova_compute[230488]: 2025-12-07 10:06:14.316 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:06:14 compute-1 nova_compute[230488]: 2025-12-07 10:06:14.317 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:06:14 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:14 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6bac004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:14 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:14 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:06:14 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:14.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:06:14 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:14 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:06:14 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:14.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:06:15 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/2333995897' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:06:15 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/1573656697' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:06:15 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/3698721280' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:06:15 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:15 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6bb0009100 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:15 compute-1 nova_compute[230488]: 2025-12-07 10:06:15.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:06:15 compute-1 nova_compute[230488]: 2025-12-07 10:06:15.270 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:06:15 compute-1 nova_compute[230488]: 2025-12-07 10:06:15.270 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:06:15 compute-1 nova_compute[230488]: 2025-12-07 10:06:15.294 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:06:15 compute-1 nova_compute[230488]: 2025-12-07 10:06:15.294 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:06:15 compute-1 nova_compute[230488]: 2025-12-07 10:06:15.294 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:06:15 compute-1 nova_compute[230488]: 2025-12-07 10:06:15.294 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 07 10:06:15 compute-1 nova_compute[230488]: 2025-12-07 10:06:15.295 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 07 10:06:15 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:15 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80003b70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:15 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 07 10:06:15 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3230038800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:06:15 compute-1 nova_compute[230488]: 2025-12-07 10:06:15.773 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 07 10:06:15 compute-1 nova_compute[230488]: 2025-12-07 10:06:15.938 230492 WARNING nova.virt.libvirt.driver [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 07 10:06:15 compute-1 nova_compute[230488]: 2025-12-07 10:06:15.941 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5245MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 07 10:06:15 compute-1 nova_compute[230488]: 2025-12-07 10:06:15.942 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:06:15 compute-1 nova_compute[230488]: 2025-12-07 10:06:15.942 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:06:16 compute-1 nova_compute[230488]: 2025-12-07 10:06:16.017 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 07 10:06:16 compute-1 nova_compute[230488]: 2025-12-07 10:06:16.018 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 07 10:06:16 compute-1 nova_compute[230488]: 2025-12-07 10:06:16.032 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 07 10:06:16 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:16 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b94004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:16 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:06:16 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:16 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:06:16 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:16.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:06:16 compute-1 ceph-mon[80077]: pgmap v713: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 07 10:06:16 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/3230038800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:06:16 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 07 10:06:16 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3443804939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:06:16 compute-1 nova_compute[230488]: 2025-12-07 10:06:16.553 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 07 10:06:16 compute-1 nova_compute[230488]: 2025-12-07 10:06:16.559 230492 DEBUG nova.compute.provider_tree [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Inventory has not changed in ProviderTree for provider: 58b51610-0751-43d9-94a3-66540bffec81 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 07 10:06:16 compute-1 nova_compute[230488]: 2025-12-07 10:06:16.622 230492 DEBUG nova.scheduler.client.report [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Inventory has not changed for provider 58b51610-0751-43d9-94a3-66540bffec81 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 07 10:06:16 compute-1 nova_compute[230488]: 2025-12-07 10:06:16.623 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 07 10:06:16 compute-1 nova_compute[230488]: 2025-12-07 10:06:16.624 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:06:16 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:16 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:06:16 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:16.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:06:17 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:17 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6bac004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:17 compute-1 sudo[233289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 10:06:17 compute-1 sudo[233289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:06:17 compute-1 sudo[233289]: pam_unix(sudo:session): session closed for user root
Dec 07 10:06:17 compute-1 sudo[233320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 07 10:06:17 compute-1 sudo[233320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:06:17 compute-1 podman[233313]: 2025-12-07 10:06:17.40471895 +0000 UTC m=+0.066863322 container health_status 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=multipathd, managed_by=edpm_ansible)
Dec 07 10:06:17 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/3443804939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:06:17 compute-1 nova_compute[230488]: 2025-12-07 10:06:17.619 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:06:17 compute-1 nova_compute[230488]: 2025-12-07 10:06:17.619 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:06:17 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:17 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c001b80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:17 compute-1 sudo[233320]: pam_unix(sudo:session): session closed for user root
Dec 07 10:06:18 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:18 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80003b70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:18 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:18 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:06:18 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:18.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:06:18 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:18 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:06:18 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:18.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:06:19 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:19 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b94004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:19 compute-1 ceph-mon[80077]: pgmap v714: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 10:06:19 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:06:19 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 07 10:06:19 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:19 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6bac004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:20 compute-1 ceph-mon[80077]: pgmap v715: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 10:06:20 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:06:20 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:06:20 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 07 10:06:20 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 07 10:06:20 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:06:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:20 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c001b80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:20 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:20 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:06:20 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:20.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:06:20 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:20 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:06:20 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:20.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:06:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:21 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80003b70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:21 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:06:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:21 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b94004020 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:22 compute-1 sudo[233391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:06:22 compute-1 sudo[233391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:06:22 compute-1 sudo[233391]: pam_unix(sudo:session): session closed for user root
Dec 07 10:06:22 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:22 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6bac004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:22 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:22 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:06:22 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:22.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:06:22 compute-1 ceph-mon[80077]: pgmap v716: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 10:06:22 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:22 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:06:22 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:22.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:06:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:23 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c001b80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:23 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80003b70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:24 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:24 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b94004040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:24 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:24 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:06:24 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:24.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:06:24 compute-1 ceph-mon[80077]: pgmap v717: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 10:06:24 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:24 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:06:24 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:24.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:06:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:25 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6bac004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:25 compute-1 sudo[233418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 07 10:06:25 compute-1 sudo[233418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:06:25 compute-1 sudo[233418]: pam_unix(sudo:session): session closed for user root
Dec 07 10:06:25 compute-1 podman[233442]: 2025-12-07 10:06:25.304036608 +0000 UTC m=+0.078146020 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 07 10:06:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:25 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c001b80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:26 compute-1 ceph-mon[80077]: pgmap v718: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec 07 10:06:26 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:06:26 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:06:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:26 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80003b70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:26 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:06:26 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:26 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:06:26 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:26.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:06:26 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:26 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:06:26 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:26.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:06:27 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:27 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b94004060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:27 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:27 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6bac004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:27 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:27 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:06:27 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - - [07/Dec/2025:10:06:27.913 +0000] "GET /swift/info HTTP/1.1" 200 539 - "python-urllib3/1.26.5" - latency=0.001000027s
Dec 07 10:06:28 compute-1 ceph-mon[80077]: pgmap v719: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 10:06:28 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:06:28 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:28 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c001b80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:28 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:28 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:06:28 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:28.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:06:28 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:28 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:06:28 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:28.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:06:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:29 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c001b80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:29 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:30 compute-1 ceph-mon[80077]: pgmap v720: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 10:06:30 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:30 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b94004080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:30 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:30 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:06:30 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:30.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:06:30 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:30 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:06:30 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:30.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:06:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:31 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b94004080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:31 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:06:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:31 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b94004080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:32 compute-1 ceph-mon[80077]: pgmap v721: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 10:06:32 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:32 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:32 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:32 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:06:32 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:32.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:06:32 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:32 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:06:32 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:32.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:06:33 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e149 e149: 3 total, 3 up, 3 in
Dec 07 10:06:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:33 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:33 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:34 compute-1 ceph-mon[80077]: pgmap v722: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 10:06:34 compute-1 ceph-mon[80077]: osdmap e149: 3 total, 3 up, 3 in
Dec 07 10:06:34 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e150 e150: 3 total, 3 up, 3 in
Dec 07 10:06:34 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:34 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b94004080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:34 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:34 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:06:34 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:34.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:06:34 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:34 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:06:34 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:34.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:06:35 compute-1 ceph-mon[80077]: osdmap e150: 3 total, 3 up, 3 in
Dec 07 10:06:35 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e151 e151: 3 total, 3 up, 3 in
Dec 07 10:06:35 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:35 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:35 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:35 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004490 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:36 compute-1 ceph-mon[80077]: pgmap v725: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 8.6 KiB/s rd, 1.4 KiB/s wr, 12 op/s
Dec 07 10:06:36 compute-1 ceph-mon[80077]: osdmap e151: 3 total, 3 up, 3 in
Dec 07 10:06:36 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:36 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:36 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/100636 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 07 10:06:36 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:06:36 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:36 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:06:36 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:36.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:06:36 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:36 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:06:36 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:36.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:06:37 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e152 e152: 3 total, 3 up, 3 in
Dec 07 10:06:37 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:37 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b94004080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:37 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:37 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:38 compute-1 ceph-mon[80077]: pgmap v727: 337 pgs: 337 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 1.8 KiB/s wr, 16 op/s
Dec 07 10:06:38 compute-1 ceph-mon[80077]: osdmap e152: 3 total, 3 up, 3 in
Dec 07 10:06:38 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:38 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004490 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:38 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:38 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:06:38 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:38.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:06:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:06:38.639 142603 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:06:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:06:38.639 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:06:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:06:38.639 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:06:38 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:38 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:06:38 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:38.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:06:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:39 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:39 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b94004080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:40 compute-1 ceph-mon[80077]: pgmap v729: 337 pgs: 337 active+clean; 24 MiB data, 182 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 4.1 MiB/s wr, 42 op/s
Dec 07 10:06:40 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:40 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:40 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:40 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:06:40 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:40.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:06:40 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:40 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:06:40 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:40.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:06:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:41 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004490 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:06:41 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:06:41 compute-1 kernel: ganesha.nfsd[233286]: segfault at 50 ip 00007f6c5cab632e sp 00007f6c21ffa210 error 4 in libntirpc.so.5.8[7f6c5ca9b000+2c000] likely on CPU 5 (core 0, socket 5)
Dec 07 10:06:41 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec 07 10:06:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[232093]: 07/12/2025 10:06:41 : epoch 69355125 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c003db0 fd 38 proxy ignored for local
Dec 07 10:06:41 compute-1 systemd[1]: Started Process Core Dump (PID 233471/UID 0).
Dec 07 10:06:41 compute-1 ceph-mon[80077]: pgmap v730: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 6.0 MiB/s wr, 55 op/s
Dec 07 10:06:42 compute-1 sudo[233473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:06:42 compute-1 sudo[233473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:06:42 compute-1 sudo[233473]: pam_unix(sudo:session): session closed for user root
Dec 07 10:06:42 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:42 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:06:42 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:42.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:06:42 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e153 e153: 3 total, 3 up, 3 in
Dec 07 10:06:42 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:06:42 compute-1 ceph-mon[80077]: osdmap e153: 3 total, 3 up, 3 in
Dec 07 10:06:42 compute-1 systemd-coredump[233472]: Process 232097 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 61:
                                                    #0  0x00007f6c5cab632e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Dec 07 10:06:42 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:42 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:06:42 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:42.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:06:42 compute-1 systemd[1]: systemd-coredump@13-233471-0.service: Deactivated successfully.
Dec 07 10:06:42 compute-1 systemd[1]: systemd-coredump@13-233471-0.service: Consumed 1.180s CPU time.
Dec 07 10:06:43 compute-1 podman[233504]: 2025-12-07 10:06:43.065798941 +0000 UTC m=+0.036456734 container died 5558e026e61dff85b27fb203b37d522cd936a1ed9d5d8502391fa347c3059d00 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 07 10:06:43 compute-1 systemd[1]: var-lib-containers-storage-overlay-8ccca33ed3f161e438298024a71576cbef28dcc7f173e6d8be92b72bdcec8592-merged.mount: Deactivated successfully.
Dec 07 10:06:43 compute-1 podman[233504]: 2025-12-07 10:06:43.121307433 +0000 UTC m=+0.091965196 container remove 5558e026e61dff85b27fb203b37d522cd936a1ed9d5d8502391fa347c3059d00 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 07 10:06:43 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Main process exited, code=exited, status=139/n/a
Dec 07 10:06:43 compute-1 podman[233503]: 2025-12-07 10:06:43.142242704 +0000 UTC m=+0.110063090 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125)
Dec 07 10:06:43 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Failed with result 'exit-code'.
Dec 07 10:06:43 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Consumed 1.902s CPU time.
Dec 07 10:06:43 compute-1 ceph-mon[80077]: pgmap v732: 337 pgs: 337 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 5.2 MiB/s wr, 36 op/s
Dec 07 10:06:44 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:44 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:06:44 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:44.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:06:44 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:44 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:06:44 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:44.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:06:45 compute-1 ceph-mon[80077]: pgmap v733: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 5.1 MiB/s wr, 36 op/s
Dec 07 10:06:46 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:06:46 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:46 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:06:46 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:46.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:06:46 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:46 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:06:46 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:46.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:06:47 compute-1 podman[233573]: 2025-12-07 10:06:47.593345032 +0000 UTC m=+0.087213016 container health_status 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 07 10:06:47 compute-1 ceph-mon[80077]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #43. Immutable memtables: 0.
Dec 07 10:06:47 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:06:47.627420) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 07 10:06:47 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 43
Dec 07 10:06:47 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102007627453, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 929, "num_deletes": 252, "total_data_size": 1948839, "memory_usage": 1971496, "flush_reason": "Manual Compaction"}
Dec 07 10:06:47 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #44: started
Dec 07 10:06:47 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102007635398, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 44, "file_size": 864013, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23436, "largest_seqno": 24360, "table_properties": {"data_size": 860338, "index_size": 1391, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9433, "raw_average_key_size": 20, "raw_value_size": 852531, "raw_average_value_size": 1837, "num_data_blocks": 61, "num_entries": 464, "num_filter_entries": 464, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765101945, "oldest_key_time": 1765101945, "file_creation_time": 1765102007, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "19b7cd17-892b-4642-8771-311739802c4a", "db_session_id": "JE48258Z8V09XG5T5TD7", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Dec 07 10:06:47 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 8009 microseconds, and 3091 cpu microseconds.
Dec 07 10:06:47 compute-1 ceph-mon[80077]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 07 10:06:47 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:06:47.635427) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #44: 864013 bytes OK
Dec 07 10:06:47 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:06:47.635443) [db/memtable_list.cc:519] [default] Level-0 commit table #44 started
Dec 07 10:06:47 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:06:47.636874) [db/memtable_list.cc:722] [default] Level-0 commit table #44: memtable #1 done
Dec 07 10:06:47 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:06:47.636888) EVENT_LOG_v1 {"time_micros": 1765102007636884, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 07 10:06:47 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:06:47.636901) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 07 10:06:47 compute-1 ceph-mon[80077]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 1944172, prev total WAL file size 1944172, number of live WAL files 2.
Dec 07 10:06:47 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000040.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 10:06:47 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:06:47.637591) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353031' seq:72057594037927935, type:22 .. '6D67727374617400373534' seq:0, type:0; will stop at (end)
Dec 07 10:06:47 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 07 10:06:47 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [44(843KB)], [42(14MB)]
Dec 07 10:06:47 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102007637712, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [44], "files_L6": [42], "score": -1, "input_data_size": 16305454, "oldest_snapshot_seqno": -1}
Dec 07 10:06:47 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/100647 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 07 10:06:47 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #45: 5529 keys, 12614212 bytes, temperature: kUnknown
Dec 07 10:06:47 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102007776421, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 45, "file_size": 12614212, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12578372, "index_size": 20946, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13829, "raw_key_size": 139988, "raw_average_key_size": 25, "raw_value_size": 12479427, "raw_average_value_size": 2257, "num_data_blocks": 856, "num_entries": 5529, "num_filter_entries": 5529, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100473, "oldest_key_time": 0, "file_creation_time": 1765102007, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "19b7cd17-892b-4642-8771-311739802c4a", "db_session_id": "JE48258Z8V09XG5T5TD7", "orig_file_number": 45, "seqno_to_time_mapping": "N/A"}}
Dec 07 10:06:47 compute-1 ceph-mon[80077]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 07 10:06:47 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:06:47.776827) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 12614212 bytes
Dec 07 10:06:47 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:06:47.778230) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 117.5 rd, 90.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 14.7 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(33.5) write-amplify(14.6) OK, records in: 6024, records dropped: 495 output_compression: NoCompression
Dec 07 10:06:47 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:06:47.778252) EVENT_LOG_v1 {"time_micros": 1765102007778241, "job": 24, "event": "compaction_finished", "compaction_time_micros": 138810, "compaction_time_cpu_micros": 48853, "output_level": 6, "num_output_files": 1, "total_output_size": 12614212, "num_input_records": 6024, "num_output_records": 5529, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 07 10:06:47 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 10:06:47 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102007778514, "job": 24, "event": "table_file_deletion", "file_number": 44}
Dec 07 10:06:47 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000042.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 10:06:47 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102007781675, "job": 24, "event": "table_file_deletion", "file_number": 42}
Dec 07 10:06:47 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:06:47.637405) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:06:47 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:06:47.781735) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:06:47 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:06:47.781743) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:06:47 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:06:47.781747) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:06:47 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:06:47.781752) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:06:47 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:06:47.781756) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:06:48 compute-1 ceph-mon[80077]: pgmap v734: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 4.2 MiB/s wr, 29 op/s
Dec 07 10:06:48 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:48 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:06:48 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:48.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:06:48 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:48 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:06:48 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:48.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:06:50 compute-1 ceph-mon[80077]: pgmap v735: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 8.8 KiB/s rd, 1.7 MiB/s wr, 14 op/s
Dec 07 10:06:50 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:50 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:06:50 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:50.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:06:50 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:50 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:06:50 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:50.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:06:51 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:06:52 compute-1 ceph-mon[80077]: pgmap v736: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 511 B/s wr, 2 op/s
Dec 07 10:06:52 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:52 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:06:52 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:52.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:06:52 compute-1 sshd-session[233597]: Invalid user guest from 104.248.193.130 port 54150
Dec 07 10:06:52 compute-1 sshd-session[233597]: Connection closed by invalid user guest 104.248.193.130 port 54150 [preauth]
Dec 07 10:06:52 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:52 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:06:52 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:52.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:06:53 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Scheduled restart job, restart counter is at 14.
Dec 07 10:06:53 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.jddrlu for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec 07 10:06:53 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Consumed 1.902s CPU time.
Dec 07 10:06:53 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.jddrlu for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c...
Dec 07 10:06:53 compute-1 podman[233646]: 2025-12-07 10:06:53.737179255 +0000 UTC m=+0.047271459 container create 0671de3c02ceb46f10ee8ba36b1872130586741104affd1ac3189424b44908b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 07 10:06:53 compute-1 podman[233646]: 2025-12-07 10:06:53.717155829 +0000 UTC m=+0.027248013 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 07 10:06:53 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c4fa6211d7ba55e6442446b56225ceaa3a0ee00aa882ba53f63877749fed254/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec 07 10:06:53 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c4fa6211d7ba55e6442446b56225ceaa3a0ee00aa882ba53f63877749fed254/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec 07 10:06:53 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c4fa6211d7ba55e6442446b56225ceaa3a0ee00aa882ba53f63877749fed254/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 07 10:06:53 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c4fa6211d7ba55e6442446b56225ceaa3a0ee00aa882ba53f63877749fed254/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.jddrlu-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec 07 10:06:53 compute-1 podman[233646]: 2025-12-07 10:06:53.834424694 +0000 UTC m=+0.144516878 container init 0671de3c02ceb46f10ee8ba36b1872130586741104affd1ac3189424b44908b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec 07 10:06:53 compute-1 podman[233646]: 2025-12-07 10:06:53.844285943 +0000 UTC m=+0.154378107 container start 0671de3c02ceb46f10ee8ba36b1872130586741104affd1ac3189424b44908b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 07 10:06:53 compute-1 bash[233646]: 0671de3c02ceb46f10ee8ba36b1872130586741104affd1ac3189424b44908b8
Dec 07 10:06:53 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:06:53 : epoch 693551bd : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec 07 10:06:53 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:06:53 : epoch 693551bd : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec 07 10:06:53 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.jddrlu for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec 07 10:06:53 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:06:53 : epoch 693551bd : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec 07 10:06:53 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:06:53 : epoch 693551bd : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec 07 10:06:53 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:06:53 : epoch 693551bd : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec 07 10:06:53 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:06:53 : epoch 693551bd : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec 07 10:06:53 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:06:53 : epoch 693551bd : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec 07 10:06:53 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:06:53 : epoch 693551bd : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 07 10:06:54 compute-1 ceph-mon[80077]: pgmap v737: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 494 B/s wr, 2 op/s
Dec 07 10:06:54 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:54 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:06:54 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:54.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:06:54 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:06:54.473 142603 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '76:bc:71', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ce:90:a9:76:77:00'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 07 10:06:54 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:06:54.474 142603 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 07 10:06:54 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:54 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:06:54 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:54.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:06:55 compute-1 podman[233706]: 2025-12-07 10:06:55.592302993 +0000 UTC m=+0.088005459 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 07 10:06:56 compute-1 ceph-mon[80077]: pgmap v738: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 767 B/s wr, 3 op/s
Dec 07 10:06:56 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:06:56 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:56 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:06:56 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:56.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:06:56 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:56 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:06:56 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:56.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:06:58 compute-1 ceph-mon[80077]: pgmap v739: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 682 B/s wr, 2 op/s
Dec 07 10:06:58 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:06:58 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:58 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:06:58 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:06:58.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:06:58 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:06:58.476 142603 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e231b22a-cdf9-44dd-ad96-a8e48b3d52da, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 07 10:06:58 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:06:58 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:06:58 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:06:58.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:06:59 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:06:59 : epoch 693551bd : compute-1 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Dec 07 10:06:59 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:06:59 : epoch 693551bd : compute-1 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Dec 07 10:06:59 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:06:59 : epoch 693551bd : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 07 10:06:59 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:06:59 : epoch 693551bd : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 07 10:06:59 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:06:59 : epoch 693551bd : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 07 10:07:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:00 : epoch 693551bd : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 07 10:07:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:00 : epoch 693551bd : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 07 10:07:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:00 : epoch 693551bd : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 07 10:07:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:00 : epoch 693551bd : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 07 10:07:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:00 : epoch 693551bd : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 07 10:07:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:00 : epoch 693551bd : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 07 10:07:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:00 : epoch 693551bd : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 07 10:07:00 compute-1 ceph-mon[80077]: pgmap v740: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 682 B/s wr, 2 op/s
Dec 07 10:07:00 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:00 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:07:00 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:00.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:07:00 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:00 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:07:00 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:00.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:07:01 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:07:02 compute-1 sudo[233728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:07:02 compute-1 sudo[233728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:07:02 compute-1 sudo[233728]: pam_unix(sudo:session): session closed for user root
Dec 07 10:07:02 compute-1 ceph-mon[80077]: pgmap v741: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 852 B/s wr, 3 op/s
Dec 07 10:07:02 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/100702 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 07 10:07:02 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:02 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:07:02 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:02.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:07:02 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 07 10:07:02 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3465607308' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 07 10:07:02 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 07 10:07:02 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3465607308' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 07 10:07:02 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:02 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:07:02 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:02.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:07:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/3465607308' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 07 10:07:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/3465607308' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 07 10:07:04 compute-1 ceph-mon[80077]: pgmap v742: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 597 B/s wr, 2 op/s
Dec 07 10:07:04 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:04 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:07:04 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:04.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:07:04 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:04 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:07:04 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:04.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:07:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:06 : epoch 693551bd : compute-1 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000024:nfs.cephfs.0: -2
Dec 07 10:07:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:06 : epoch 693551bd : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 07 10:07:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:06 : epoch 693551bd : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec 07 10:07:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:06 : epoch 693551bd : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec 07 10:07:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:06 : epoch 693551bd : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec 07 10:07:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:06 : epoch 693551bd : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec 07 10:07:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:06 : epoch 693551bd : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec 07 10:07:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:06 : epoch 693551bd : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec 07 10:07:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:06 : epoch 693551bd : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 10:07:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:06 : epoch 693551bd : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 10:07:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:06 : epoch 693551bd : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 10:07:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:06 : epoch 693551bd : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec 07 10:07:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:06 : epoch 693551bd : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 10:07:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:06 : epoch 693551bd : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec 07 10:07:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:06 : epoch 693551bd : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec 07 10:07:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:06 : epoch 693551bd : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec 07 10:07:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:06 : epoch 693551bd : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec 07 10:07:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:06 : epoch 693551bd : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec 07 10:07:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:06 : epoch 693551bd : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec 07 10:07:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:06 : epoch 693551bd : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec 07 10:07:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:06 : epoch 693551bd : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec 07 10:07:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:06 : epoch 693551bd : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec 07 10:07:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:06 : epoch 693551bd : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec 07 10:07:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:06 : epoch 693551bd : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 07 10:07:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:06 : epoch 693551bd : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec 07 10:07:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:06 : epoch 693551bd : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 07 10:07:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:06 : epoch 693551bd : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec 07 10:07:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:06 : epoch 693551bd : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec 07 10:07:06 compute-1 ceph-mon[80077]: pgmap v743: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Dec 07 10:07:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:06 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3b8000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:06 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:07:06 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:06 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:07:06 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:06.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:07:06 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:06 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:07:06 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:06.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:07:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:07 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3a8000fb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:07 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff394000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:08 compute-1 ceph-mon[80077]: pgmap v744: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 07 10:07:08 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:08 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff39c000e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:08 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:08 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:07:08 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:08.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:07:09 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:09 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:07:09 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:09.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:07:09 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:09 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3b8001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:09 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/100709 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 07 10:07:09 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:09 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3a8001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:10 compute-1 ceph-mon[80077]: pgmap v745: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 07 10:07:10 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:10 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3940016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:10 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:10 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:07:10 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:10.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:07:11 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:11 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:07:11 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:11.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:07:11 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:11 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff39c001920 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:11 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/910571761' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:07:11 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:07:11 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:11 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3b8001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:12 compute-1 nova_compute[230488]: 2025-12-07 10:07:12.270 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:07:12 compute-1 nova_compute[230488]: 2025-12-07 10:07:12.271 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 07 10:07:12 compute-1 nova_compute[230488]: 2025-12-07 10:07:12.314 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 07 10:07:12 compute-1 nova_compute[230488]: 2025-12-07 10:07:12.315 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:07:12 compute-1 nova_compute[230488]: 2025-12-07 10:07:12.316 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 07 10:07:12 compute-1 nova_compute[230488]: 2025-12-07 10:07:12.332 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:07:12 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:12 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3a8001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:12 compute-1 ceph-mon[80077]: pgmap v746: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 07 10:07:12 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:12 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:07:12 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:12.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:07:13 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:13 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:07:13 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:13.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:07:13 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:13 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3940016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:13 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:07:13 compute-1 podman[233774]: 2025-12-07 10:07:13.64859984 +0000 UTC m=+0.136648374 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, container_name=ovn_controller)
Dec 07 10:07:13 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:13 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff39c001920 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:14 compute-1 nova_compute[230488]: 2025-12-07 10:07:14.346 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:07:14 compute-1 nova_compute[230488]: 2025-12-07 10:07:14.346 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:07:14 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:14 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff39c001920 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:14 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:14 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:07:14 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:14.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:07:14 compute-1 ceph-mon[80077]: pgmap v747: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Dec 07 10:07:14 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/3268852474' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:07:14 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1592531399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:07:14 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e154 e154: 3 total, 3 up, 3 in
Dec 07 10:07:15 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:15 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:07:15 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:15.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:07:15 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:15 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3a8001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:15 compute-1 nova_compute[230488]: 2025-12-07 10:07:15.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:07:15 compute-1 nova_compute[230488]: 2025-12-07 10:07:15.269 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 07 10:07:15 compute-1 nova_compute[230488]: 2025-12-07 10:07:15.270 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 07 10:07:15 compute-1 nova_compute[230488]: 2025-12-07 10:07:15.293 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 07 10:07:15 compute-1 nova_compute[230488]: 2025-12-07 10:07:15.293 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:07:15 compute-1 nova_compute[230488]: 2025-12-07 10:07:15.293 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 07 10:07:15 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e155 e155: 3 total, 3 up, 3 in
Dec 07 10:07:15 compute-1 ceph-mon[80077]: osdmap e154: 3 total, 3 up, 3 in
Dec 07 10:07:15 compute-1 ceph-mon[80077]: pgmap v749: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 0 B/s wr, 8 op/s
Dec 07 10:07:15 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/765336818' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:07:15 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1650379800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:07:15 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:15 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3940016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:16 compute-1 nova_compute[230488]: 2025-12-07 10:07:16.268 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:07:16 compute-1 nova_compute[230488]: 2025-12-07 10:07:16.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:07:16 compute-1 nova_compute[230488]: 2025-12-07 10:07:16.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:07:16 compute-1 nova_compute[230488]: 2025-12-07 10:07:16.297 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:07:16 compute-1 nova_compute[230488]: 2025-12-07 10:07:16.297 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:07:16 compute-1 nova_compute[230488]: 2025-12-07 10:07:16.297 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:07:16 compute-1 nova_compute[230488]: 2025-12-07 10:07:16.297 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 07 10:07:16 compute-1 nova_compute[230488]: 2025-12-07 10:07:16.298 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 07 10:07:16 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:16 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3b80089d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:16 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:07:16 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:16 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:07:16 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:16.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:07:16 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 07 10:07:16 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3301732007' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:07:16 compute-1 nova_compute[230488]: 2025-12-07 10:07:16.800 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 07 10:07:16 compute-1 nova_compute[230488]: 2025-12-07 10:07:16.978 230492 WARNING nova.virt.libvirt.driver [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 07 10:07:16 compute-1 nova_compute[230488]: 2025-12-07 10:07:16.980 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5235MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 07 10:07:16 compute-1 nova_compute[230488]: 2025-12-07 10:07:16.980 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:07:16 compute-1 nova_compute[230488]: 2025-12-07 10:07:16.981 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:07:17 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:17 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:07:17 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:17.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:07:17 compute-1 ceph-mon[80077]: osdmap e155: 3 total, 3 up, 3 in
Dec 07 10:07:17 compute-1 nova_compute[230488]: 2025-12-07 10:07:17.108 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 07 10:07:17 compute-1 nova_compute[230488]: 2025-12-07 10:07:17.109 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 07 10:07:17 compute-1 nova_compute[230488]: 2025-12-07 10:07:17.184 230492 DEBUG nova.scheduler.client.report [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Refreshing inventories for resource provider 58b51610-0751-43d9-94a3-66540bffec81 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 07 10:07:17 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:17 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff39c001920 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:17 compute-1 nova_compute[230488]: 2025-12-07 10:07:17.288 230492 DEBUG nova.scheduler.client.report [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Updating ProviderTree inventory for provider 58b51610-0751-43d9-94a3-66540bffec81 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 07 10:07:17 compute-1 nova_compute[230488]: 2025-12-07 10:07:17.290 230492 DEBUG nova.compute.provider_tree [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Updating inventory in ProviderTree for provider 58b51610-0751-43d9-94a3-66540bffec81 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 07 10:07:17 compute-1 nova_compute[230488]: 2025-12-07 10:07:17.313 230492 DEBUG nova.scheduler.client.report [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Refreshing aggregate associations for resource provider 58b51610-0751-43d9-94a3-66540bffec81, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 07 10:07:17 compute-1 nova_compute[230488]: 2025-12-07 10:07:17.339 230492 DEBUG nova.scheduler.client.report [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Refreshing trait associations for resource provider 58b51610-0751-43d9-94a3-66540bffec81, traits: HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE,HW_CPU_X86_AESNI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AVX,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI2,COMPUTE_RESCUE_BFV,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_F16C,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SVM,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_EXTEND _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 07 10:07:17 compute-1 nova_compute[230488]: 2025-12-07 10:07:17.361 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 07 10:07:17 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:17 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff39c001920 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:17 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 07 10:07:17 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3530416204' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:07:17 compute-1 nova_compute[230488]: 2025-12-07 10:07:17.832 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 07 10:07:17 compute-1 nova_compute[230488]: 2025-12-07 10:07:17.839 230492 DEBUG nova.compute.provider_tree [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Inventory has not changed in ProviderTree for provider: 58b51610-0751-43d9-94a3-66540bffec81 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 07 10:07:17 compute-1 nova_compute[230488]: 2025-12-07 10:07:17.861 230492 DEBUG nova.scheduler.client.report [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Inventory has not changed for provider 58b51610-0751-43d9-94a3-66540bffec81 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 07 10:07:17 compute-1 nova_compute[230488]: 2025-12-07 10:07:17.864 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 07 10:07:17 compute-1 nova_compute[230488]: 2025-12-07 10:07:17.864 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.883s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:07:18 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/3301732007' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:07:18 compute-1 ceph-mon[80077]: pgmap v751: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 10 op/s
Dec 07 10:07:18 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/3530416204' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:07:18 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/2929662140' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 07 10:07:18 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:18 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff394002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:18 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:18 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:07:18 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:18.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:07:18 compute-1 podman[233849]: 2025-12-07 10:07:18.594910529 +0000 UTC m=+0.095884433 container health_status 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd)
Dec 07 10:07:18 compute-1 nova_compute[230488]: 2025-12-07 10:07:18.865 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:07:18 compute-1 nova_compute[230488]: 2025-12-07 10:07:18.865 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:07:19 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:19 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:07:19 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:19.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:07:19 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:19 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3b80089d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:19 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:19 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3a8001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:19 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1103610747' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 07 10:07:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:20 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff39c003590 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:20 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:20 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:07:20 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:20.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:07:20 compute-1 ceph-mon[80077]: pgmap v752: 337 pgs: 337 active+clean; 87 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 48 op/s
Dec 07 10:07:21 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:21 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:07:21 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:21.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:07:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:21 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff394002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:21 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:07:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:21 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3b80096e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:21 compute-1 ceph-mon[80077]: pgmap v753: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 51 op/s
Dec 07 10:07:22 compute-1 sudo[233872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:07:22 compute-1 sudo[233872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:07:22 compute-1 sudo[233872]: pam_unix(sudo:session): session closed for user root
Dec 07 10:07:22 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:22 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3a8001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:22 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:22 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:07:22 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:22.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:07:22 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 e156: 3 total, 3 up, 3 in
Dec 07 10:07:23 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:23 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:07:23 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:23.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:07:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:23 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff39c003590 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:23 compute-1 ceph-mon[80077]: osdmap e156: 3 total, 3 up, 3 in
Dec 07 10:07:23 compute-1 ceph-mon[80077]: pgmap v755: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.7 MiB/s wr, 40 op/s
Dec 07 10:07:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:23 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff394002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:24 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:24 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3b80096e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:24 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:24 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:07:24 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:24.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:07:25 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:25 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:07:25 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:25.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:07:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:25 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3a8001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:25 compute-1 sudo[233898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 10:07:25 compute-1 sudo[233898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:07:25 compute-1 sudo[233898]: pam_unix(sudo:session): session closed for user root
Dec 07 10:07:25 compute-1 sudo[233923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 07 10:07:25 compute-1 sudo[233923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:07:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:25 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff39c003eb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:26 compute-1 ceph-mon[80077]: pgmap v756: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.3 MiB/s wr, 89 op/s
Dec 07 10:07:26 compute-1 sudo[233923]: pam_unix(sudo:session): session closed for user root
Dec 07 10:07:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:26 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff394003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:26 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:07:26 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:26 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:07:26 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:26.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:07:26 compute-1 podman[233982]: 2025-12-07 10:07:26.56270099 +0000 UTC m=+0.063930873 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 07 10:07:27 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:27 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:07:27 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:27.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:07:27 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:27 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3b80096e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:27 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:27 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3b80096e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:28 compute-1 ceph-mon[80077]: pgmap v757: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.1 MiB/s wr, 83 op/s
Dec 07 10:07:28 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:07:28 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:28 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff39c003eb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:28 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:28 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:07:28 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:28.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:07:29 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:29 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:07:29 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:29.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:07:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:29 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff394003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:29 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff394003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:30 compute-1 ceph-mon[80077]: pgmap v758: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 17 KiB/s wr, 90 op/s
Dec 07 10:07:30 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:07:30 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:07:30 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:07:30 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 07 10:07:30 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:07:30 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:07:30 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 07 10:07:30 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 07 10:07:30 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:07:30 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:30 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3a8001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:30 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:30 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:07:30 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:30.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:07:31 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:31 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:07:31 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:31.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:07:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:31 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff39c003eb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:31 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:07:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:31 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3b80096e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:32 compute-1 ceph-mon[80077]: pgmap v759: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 88 op/s
Dec 07 10:07:32 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:32 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff394003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:32 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:32 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:07:32 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:32.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:07:33 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:33 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:07:33 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:33.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:07:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:33 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3a8001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:33 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff39c003eb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:34 compute-1 ceph-mon[80077]: pgmap v760: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 14 KiB/s wr, 85 op/s
Dec 07 10:07:34 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:34 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3b80096e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:34 compute-1 ceph-osd[77581]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec 07 10:07:34 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:34 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:07:34 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:34.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:07:35 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:35 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:07:35 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:35.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:07:35 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:35 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff394003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:35 compute-1 sudo[234007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 07 10:07:35 compute-1 sudo[234007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:07:35 compute-1 sudo[234007]: pam_unix(sudo:session): session closed for user root
Dec 07 10:07:35 compute-1 sshd-session[234005]: Invalid user weblogic from 104.248.193.130 port 44298
Dec 07 10:07:35 compute-1 sshd-session[234005]: Connection closed by invalid user weblogic 104.248.193.130 port 44298 [preauth]
Dec 07 10:07:35 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:35 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3a8001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:36 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:36 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff39c003eb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:36 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:07:36 compute-1 ceph-mon[80077]: pgmap v761: 337 pgs: 337 active+clean; 95 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 500 KiB/s wr, 79 op/s
Dec 07 10:07:36 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:07:36 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:07:36 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:36 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:07:36 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:36.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:07:37 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:37 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:07:37 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:37.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:07:37 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:37 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3b80096e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:37 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:37 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3b80096e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:38 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:38 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3a8001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:38 compute-1 ceph-mon[80077]: pgmap v762: 337 pgs: 337 active+clean; 95 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 1004 KiB/s rd, 487 KiB/s wr, 38 op/s
Dec 07 10:07:38 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:38 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:07:38 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:38.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:07:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:07:38.641 142603 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:07:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:07:38.641 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:07:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:07:38.641 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:07:39 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:39 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:07:39 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:39.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:07:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:39 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff388000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:39 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff394003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:40 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:40 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff38c000d90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:40 compute-1 ceph-mon[80077]: pgmap v763: 337 pgs: 337 active+clean; 120 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.1 MiB/s wr, 89 op/s
Dec 07 10:07:40 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:40 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:07:40 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:40.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:07:41 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:41 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:07:41 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:41.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:07:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:41 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3a8001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:41 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:07:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:41 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3880016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:42 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:42 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff394003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:42 compute-1 sudo[234039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:07:42 compute-1 sudo[234039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:07:42 compute-1 sudo[234039]: pam_unix(sudo:session): session closed for user root
Dec 07 10:07:42 compute-1 ceph-mon[80077]: pgmap v764: 337 pgs: 337 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 07 10:07:42 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:07:42 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:42 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:07:42 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:42.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:07:43 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:43 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:07:43 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:43.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:07:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:43 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff38c0018b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:43 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3a8001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:44 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:44 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3880016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:44 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:44 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:07:44 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:44.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:07:44 compute-1 podman[234065]: 2025-12-07 10:07:44.628887147 +0000 UTC m=+0.122313572 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible)
Dec 07 10:07:44 compute-1 ceph-mon[80077]: pgmap v765: 337 pgs: 337 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 07 10:07:45 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:45 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:07:45 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:45.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:07:45 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:45 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff394003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:45 compute-1 ceph-mon[80077]: pgmap v766: 337 pgs: 337 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 07 10:07:45 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:45 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff38c0018b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:46 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3a8001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:46 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:46 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:07:46 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:46.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:07:47 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:47 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:07:47 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:47.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:07:47 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:47 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3880016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:47 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:47 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff394003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:47 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:07:47 compute-1 ceph-mon[80077]: pgmap v767: 337 pgs: 337 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 320 KiB/s rd, 1.7 MiB/s wr, 59 op/s
Dec 07 10:07:48 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:48 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff38c0018b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:48 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:48 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:07:48 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:48.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:07:49 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:49 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:07:49 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:49.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:07:49 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:49 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff394003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:49 compute-1 podman[234095]: 2025-12-07 10:07:49.600923487 +0000 UTC m=+0.096257903 container health_status 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 07 10:07:49 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:49 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3a8001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:50 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:50 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff388002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:50 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:50 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:07:50 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:50.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:07:51 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:51 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:07:51 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:51.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:07:51 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:51 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff38c002d40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:51 compute-1 ceph-mon[80077]: pgmap v768: 337 pgs: 337 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 320 KiB/s rd, 1.7 MiB/s wr, 59 op/s
Dec 07 10:07:51 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:51 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff394003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:52 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:52 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3a8001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:52 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:52 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:07:52 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:52.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:07:52 compute-1 ceph-mon[80077]: pgmap v769: 337 pgs: 337 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 71 KiB/s wr, 7 op/s
Dec 07 10:07:52 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/1483183754' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:07:52 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:07:53 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:53 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:07:53 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:53.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:07:53 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:53 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff388002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:53 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:53 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff38c002d40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:54 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:54 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff38c002d40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:54 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:54 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:07:54 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:54.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:07:54 compute-1 ceph-mon[80077]: pgmap v770: 337 pgs: 337 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 5.9 KiB/s rd, 16 KiB/s wr, 1 op/s
Dec 07 10:07:54 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:07:54.808 142603 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '76:bc:71', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ce:90:a9:76:77:00'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 07 10:07:54 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:07:54.810 142603 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 07 10:07:55 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:55 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:07:55 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:55.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:07:55 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:55 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3a8001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:55 compute-1 ceph-mon[80077]: pgmap v771: 337 pgs: 337 active+clean; 155 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.3 MiB/s wr, 31 op/s
Dec 07 10:07:55 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:55 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3a8001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:56 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:56 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff38c002d40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:56 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:56 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:07:56 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:56.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:07:57 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:57 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:07:57 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:57.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:07:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:57 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff394003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:57 compute-1 podman[234121]: 2025-12-07 10:07:57.577433487 +0000 UTC m=+0.070684876 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 07 10:07:57 compute-1 ceph-mon[80077]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #46. Immutable memtables: 0.
Dec 07 10:07:57 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:07:57.657650) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 07 10:07:57 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 46
Dec 07 10:07:57 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102077657695, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 982, "num_deletes": 257, "total_data_size": 2177446, "memory_usage": 2208768, "flush_reason": "Manual Compaction"}
Dec 07 10:07:57 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #47: started
Dec 07 10:07:57 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102077672965, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 47, "file_size": 1419105, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24365, "largest_seqno": 25342, "table_properties": {"data_size": 1414600, "index_size": 2093, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 9685, "raw_average_key_size": 18, "raw_value_size": 1405427, "raw_average_value_size": 2734, "num_data_blocks": 93, "num_entries": 514, "num_filter_entries": 514, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765102008, "oldest_key_time": 1765102008, "file_creation_time": 1765102077, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "19b7cd17-892b-4642-8771-311739802c4a", "db_session_id": "JE48258Z8V09XG5T5TD7", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Dec 07 10:07:57 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 15377 microseconds, and 6447 cpu microseconds.
Dec 07 10:07:57 compute-1 ceph-mon[80077]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 07 10:07:57 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:07:57.673028) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #47: 1419105 bytes OK
Dec 07 10:07:57 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:07:57.673053) [db/memtable_list.cc:519] [default] Level-0 commit table #47 started
Dec 07 10:07:57 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:07:57.675584) [db/memtable_list.cc:722] [default] Level-0 commit table #47: memtable #1 done
Dec 07 10:07:57 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:07:57.675605) EVENT_LOG_v1 {"time_micros": 1765102077675599, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 07 10:07:57 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:07:57.675654) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 07 10:07:57 compute-1 ceph-mon[80077]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 2172535, prev total WAL file size 2172535, number of live WAL files 2.
Dec 07 10:07:57 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000043.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 10:07:57 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:07:57.676599) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323530' seq:72057594037927935, type:22 .. '6C6F676D00353033' seq:0, type:0; will stop at (end)
Dec 07 10:07:57 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 07 10:07:57 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [47(1385KB)], [45(12MB)]
Dec 07 10:07:57 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102077676659, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [47], "files_L6": [45], "score": -1, "input_data_size": 14033317, "oldest_snapshot_seqno": -1}
Dec 07 10:07:57 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:07:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:57 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3a8001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:57 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #48: 5511 keys, 13864788 bytes, temperature: kUnknown
Dec 07 10:07:57 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102077861514, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 48, "file_size": 13864788, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13827465, "index_size": 22429, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13829, "raw_key_size": 140816, "raw_average_key_size": 25, "raw_value_size": 13727329, "raw_average_value_size": 2490, "num_data_blocks": 915, "num_entries": 5511, "num_filter_entries": 5511, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100473, "oldest_key_time": 0, "file_creation_time": 1765102077, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "19b7cd17-892b-4642-8771-311739802c4a", "db_session_id": "JE48258Z8V09XG5T5TD7", "orig_file_number": 48, "seqno_to_time_mapping": "N/A"}}
Dec 07 10:07:57 compute-1 ceph-mon[80077]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 07 10:07:57 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:07:57.863312) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 13864788 bytes
Dec 07 10:07:57 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:07:57.865170) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 75.9 rd, 75.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 12.0 +0.0 blob) out(13.2 +0.0 blob), read-write-amplify(19.7) write-amplify(9.8) OK, records in: 6043, records dropped: 532 output_compression: NoCompression
Dec 07 10:07:57 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:07:57.865187) EVENT_LOG_v1 {"time_micros": 1765102077865179, "job": 26, "event": "compaction_finished", "compaction_time_micros": 184974, "compaction_time_cpu_micros": 46348, "output_level": 6, "num_output_files": 1, "total_output_size": 13864788, "num_input_records": 6043, "num_output_records": 5511, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 07 10:07:57 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 10:07:57 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102077865772, "job": 26, "event": "table_file_deletion", "file_number": 47}
Dec 07 10:07:57 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000045.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 10:07:57 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102077868031, "job": 26, "event": "table_file_deletion", "file_number": 45}
Dec 07 10:07:57 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:07:57.676523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:07:57 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:07:57.868147) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:07:57 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:07:57.868153) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:07:57 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:07:57.868154) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:07:57 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:07:57.868156) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:07:57 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:07:57.868157) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:07:58 compute-1 ceph-mon[80077]: pgmap v772: 337 pgs: 337 active+clean; 155 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.3 MiB/s wr, 30 op/s
Dec 07 10:07:58 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/3018870025' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 07 10:07:58 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:07:58 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/3262495535' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 07 10:07:58 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:58 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff388003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:58 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/100758 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 07 10:07:58 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:58 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:07:58 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:07:58.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:07:59 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:07:59 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:07:59 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:07:59.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:07:59 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:59 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff38c003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:07:59 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:07:59 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff394003dd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:00 compute-1 ceph-mon[80077]: pgmap v773: 337 pgs: 337 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Dec 07 10:08:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:00 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3a8001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:00 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:00 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:08:00 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:00.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:08:01 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:01 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:08:01 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:01.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:08:01 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:01 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff388003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:01 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:01 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff38c003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:01 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:08:01.812 142603 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e231b22a-cdf9-44dd-ad96-a8e48b3d52da, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 07 10:08:02 compute-1 ceph-mon[80077]: pgmap v774: 337 pgs: 337 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Dec 07 10:08:02 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:02 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff394003df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:02 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:02 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:08:02 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:02.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:08:02 compute-1 sudo[234143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:08:02 compute-1 sudo[234143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:08:02 compute-1 sudo[234143]: pam_unix(sudo:session): session closed for user root
Dec 07 10:08:02 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:08:03 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:03 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:08:03 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:03.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:08:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/812548690' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 07 10:08:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/812548690' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 07 10:08:03 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:03 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff394003df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:03 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:03 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff394003df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:04 compute-1 ceph-mon[80077]: pgmap v775: 337 pgs: 337 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Dec 07 10:08:04 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:04 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff38c003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:04 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:04 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.002000053s ======
Dec 07 10:08:04 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:04.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Dec 07 10:08:05 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:05 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:08:05 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:05.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:08:05 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:05 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff394003df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:05 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:05 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff394003df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:06 compute-1 ceph-mon[80077]: pgmap v776: 337 pgs: 337 active+clean; 167 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Dec 07 10:08:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:06 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3a8001cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:06 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:06 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:08:06 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:06.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:08:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:06 : epoch 693551bd : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 07 10:08:07 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:07 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:08:07 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:07.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:08:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:07 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff38c003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:07 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:08:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:07 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff394003df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:08 compute-1 ceph-mon[80077]: pgmap v777: 337 pgs: 337 active+clean; 167 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 491 KiB/s wr, 78 op/s
Dec 07 10:08:08 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:08 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff388003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:08 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:08 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:08:08 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:08.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:08:09 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:09 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:08:09 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:09.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:08:09 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:09 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff394003df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:09 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:09 : epoch 693551bd : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 07 10:08:09 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:09 : epoch 693551bd : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 07 10:08:09 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:09 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff38c003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:10 compute-1 ceph-mon[80077]: pgmap v778: 337 pgs: 337 active+clean; 167 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 493 KiB/s wr, 78 op/s
Dec 07 10:08:10 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:10 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3b8001300 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:10 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:10 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:08:10 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:10.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:08:11 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:11 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:08:11 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:11.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:08:11 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:11 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff390000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:11 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:11 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff39c001330 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:12 compute-1 ceph-mon[80077]: pgmap v779: 337 pgs: 337 active+clean; 167 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 76 op/s
Dec 07 10:08:12 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:12 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff38c003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:12 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:12 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:08:12 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:12.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:08:12 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:12 : epoch 693551bd : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 07 10:08:12 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:08:13 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:13 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:08:13 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:13.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:08:13 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:08:13 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:13 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3b8001300 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:13 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:13 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3900016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:14 compute-1 ceph-mon[80077]: pgmap v780: 337 pgs: 337 active+clean; 167 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 76 op/s
Dec 07 10:08:14 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/3984713768' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:08:14 compute-1 nova_compute[230488]: 2025-12-07 10:08:14.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:08:14 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:14 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff39c001330 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:14 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:14 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:08:14 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:14.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:08:15 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:15 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:08:15 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:15.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:08:15 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2670105863' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:08:15 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:15 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff38c003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:15 compute-1 podman[234177]: 2025-12-07 10:08:15.644426254 +0000 UTC m=+0.136431878 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 07 10:08:15 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:15 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3b8001300 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:16 compute-1 ceph-mon[80077]: pgmap v781: 337 pgs: 337 active+clean; 168 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 147 KiB/s wr, 87 op/s
Dec 07 10:08:16 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1758444221' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:08:16 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/887756406' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:08:16 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/3826204743' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:08:16 compute-1 nova_compute[230488]: 2025-12-07 10:08:16.270 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:08:16 compute-1 nova_compute[230488]: 2025-12-07 10:08:16.271 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:08:16 compute-1 nova_compute[230488]: 2025-12-07 10:08:16.271 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 07 10:08:16 compute-1 nova_compute[230488]: 2025-12-07 10:08:16.271 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:08:16 compute-1 nova_compute[230488]: 2025-12-07 10:08:16.302 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:08:16 compute-1 nova_compute[230488]: 2025-12-07 10:08:16.303 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:08:16 compute-1 nova_compute[230488]: 2025-12-07 10:08:16.303 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:08:16 compute-1 nova_compute[230488]: 2025-12-07 10:08:16.304 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 07 10:08:16 compute-1 nova_compute[230488]: 2025-12-07 10:08:16.304 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 07 10:08:16 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:16 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3900016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:16 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:16 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:08:16 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:16.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:08:16 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 07 10:08:16 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1873112236' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:08:16 compute-1 nova_compute[230488]: 2025-12-07 10:08:16.814 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 07 10:08:16 compute-1 nova_compute[230488]: 2025-12-07 10:08:16.994 230492 WARNING nova.virt.libvirt.driver [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 07 10:08:16 compute-1 nova_compute[230488]: 2025-12-07 10:08:16.995 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5211MB free_disk=59.92041015625GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 07 10:08:16 compute-1 nova_compute[230488]: 2025-12-07 10:08:16.996 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:08:16 compute-1 nova_compute[230488]: 2025-12-07 10:08:16.996 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:08:17 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:17 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:08:17 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:17.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:08:17 compute-1 nova_compute[230488]: 2025-12-07 10:08:17.131 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 07 10:08:17 compute-1 nova_compute[230488]: 2025-12-07 10:08:17.132 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 07 10:08:17 compute-1 nova_compute[230488]: 2025-12-07 10:08:17.147 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 07 10:08:17 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/1873112236' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:08:17 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:17 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff39c001330 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:17 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 07 10:08:17 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4188782547' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:08:17 compute-1 nova_compute[230488]: 2025-12-07 10:08:17.636 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 07 10:08:17 compute-1 nova_compute[230488]: 2025-12-07 10:08:17.643 230492 DEBUG nova.compute.provider_tree [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Inventory has not changed in ProviderTree for provider: 58b51610-0751-43d9-94a3-66540bffec81 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 07 10:08:17 compute-1 nova_compute[230488]: 2025-12-07 10:08:17.660 230492 DEBUG nova.scheduler.client.report [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Inventory has not changed for provider 58b51610-0751-43d9-94a3-66540bffec81 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 07 10:08:17 compute-1 nova_compute[230488]: 2025-12-07 10:08:17.663 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 07 10:08:17 compute-1 nova_compute[230488]: 2025-12-07 10:08:17.663 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:08:17 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:08:17 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:17 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff38c003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:18 compute-1 ceph-mon[80077]: pgmap v782: 337 pgs: 337 active+clean; 168 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 134 KiB/s wr, 13 op/s
Dec 07 10:08:18 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/4188782547' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:08:18 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:18 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3b8009190 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:18 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/100818 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 07 10:08:18 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:18 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:08:18 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:18.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:08:18 compute-1 nova_compute[230488]: 2025-12-07 10:08:18.658 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:08:18 compute-1 nova_compute[230488]: 2025-12-07 10:08:18.659 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:08:18 compute-1 nova_compute[230488]: 2025-12-07 10:08:18.677 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:08:18 compute-1 nova_compute[230488]: 2025-12-07 10:08:18.677 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 07 10:08:18 compute-1 nova_compute[230488]: 2025-12-07 10:08:18.678 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 07 10:08:18 compute-1 nova_compute[230488]: 2025-12-07 10:08:18.696 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 07 10:08:18 compute-1 nova_compute[230488]: 2025-12-07 10:08:18.697 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:08:18 compute-1 nova_compute[230488]: 2025-12-07 10:08:18.697 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:08:18 compute-1 nova_compute[230488]: 2025-12-07 10:08:18.697 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:08:18 compute-1 sshd-session[234249]: Invalid user mysql from 104.248.193.130 port 38980
Dec 07 10:08:19 compute-1 sshd-session[234249]: Connection closed by invalid user mysql 104.248.193.130 port 38980 [preauth]
Dec 07 10:08:19 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:19 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:08:19 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:19.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:08:19 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:19 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3900016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:19 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:19 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff39c001330 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:20 compute-1 ceph-mon[80077]: pgmap v783: 337 pgs: 337 active+clean; 199 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 274 KiB/s rd, 2.1 MiB/s wr, 55 op/s
Dec 07 10:08:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:20 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff38c003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:20 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:20 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:08:20 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:20.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:08:20 compute-1 podman[234252]: 2025-12-07 10:08:20.610551635 +0000 UTC m=+0.102557695 container health_status 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 07 10:08:21 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:21 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:08:21 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:21.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:08:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:21 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3b8009190 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:21 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff390002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:22 compute-1 ceph-mon[80077]: pgmap v784: 337 pgs: 337 active+clean; 200 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 307 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 07 10:08:22 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:22 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff39c003110 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:22 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:22 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:08:22 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:22.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:08:22 compute-1 sudo[234274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:08:22 compute-1 sudo[234274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:08:22 compute-1 sudo[234274]: pam_unix(sudo:session): session closed for user root
Dec 07 10:08:22 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:08:23 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:23 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:08:23 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:23.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:08:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:23 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff38c003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:23 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3b8009190 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:24 compute-1 ceph-mon[80077]: pgmap v785: 337 pgs: 337 active+clean; 200 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 07 10:08:24 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:24 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff390002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:24 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:24 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:08:24 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:24.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:08:25 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:25 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:08:25 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:25.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:08:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:25 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff39c003110 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:25 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff38c003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:26 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3b8009190 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:26 compute-1 ceph-mon[80077]: pgmap v786: 337 pgs: 337 active+clean; 200 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 307 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 07 10:08:26 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:26 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:08:26 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:26.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:08:27 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:27 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:08:27 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:27.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:08:27 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:27 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff390002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:27 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/241329490' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:08:27 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:08:27 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:08:27 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:27 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff39c003110 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:28 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:28 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff38c003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:28 compute-1 ceph-mon[80077]: pgmap v787: 337 pgs: 337 active+clean; 200 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 271 KiB/s rd, 2.0 MiB/s wr, 53 op/s
Dec 07 10:08:28 compute-1 podman[234302]: 2025-12-07 10:08:28.594698752 +0000 UTC m=+0.094913206 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 07 10:08:28 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:28 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:08:28 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:28.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:08:29 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:29 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:08:29 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:29.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:08:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:29 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3b8009190 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:29 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff390003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:30 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:30 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff39c003110 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:30 compute-1 ceph-mon[80077]: pgmap v788: 337 pgs: 337 active+clean; 132 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 284 KiB/s rd, 2.0 MiB/s wr, 72 op/s
Dec 07 10:08:30 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:30 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:08:30 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:30.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:08:31 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:31 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:08:31 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:31.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:08:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:31 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff38c003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:31 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3b8009190 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:32 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:32 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff390003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:32 compute-1 ceph-mon[80077]: pgmap v789: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 63 KiB/s wr, 39 op/s
Dec 07 10:08:32 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:32 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:08:32 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:32.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:08:32 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:08:32 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 07 10:08:32 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/419432582' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:08:33 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:33 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:08:33 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:33.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:08:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:33 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff39c003110 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:33 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/419432582' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:08:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:33 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff38c003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:34 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:34 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3b8009190 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:34 compute-1 ceph-mon[80077]: pgmap v790: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 17 KiB/s wr, 28 op/s
Dec 07 10:08:34 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:34 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:08:34 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:34.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:08:35 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:35 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:08:35 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:35.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:08:35 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:35 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff390003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:35 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:35 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff39c003110 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:35 compute-1 sudo[234324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 10:08:35 compute-1 sudo[234324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:08:35 compute-1 sudo[234324]: pam_unix(sudo:session): session closed for user root
Dec 07 10:08:36 compute-1 sudo[234349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 07 10:08:36 compute-1 sudo[234349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:08:36 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:36 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff38c003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:36 compute-1 ceph-mon[80077]: pgmap v791: 337 pgs: 337 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 18 KiB/s wr, 56 op/s
Dec 07 10:08:36 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:36 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:08:36 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:36.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:08:36 compute-1 sudo[234349]: pam_unix(sudo:session): session closed for user root
Dec 07 10:08:37 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:37 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:08:37 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:37.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:08:37 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:37 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3b8009190 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:37 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:08:37 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 07 10:08:37 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:08:37 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:08:37 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 07 10:08:37 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 07 10:08:37 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:08:37 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:08:37 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:37 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff390003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:38 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:38 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff39c003110 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:38 compute-1 ceph-mon[80077]: pgmap v792: 337 pgs: 337 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 4.3 KiB/s wr, 55 op/s
Dec 07 10:08:38 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:38 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:08:38 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:38.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:08:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:08:38.642 142603 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:08:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:08:38.643 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:08:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:08:38.643 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:08:39 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:39 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:08:39 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:39.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:08:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:39 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff38c003e40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:39 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3b8009190 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:40 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:40 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff390003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:40 compute-1 ceph-mon[80077]: pgmap v793: 337 pgs: 337 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 4.3 KiB/s wr, 55 op/s
Dec 07 10:08:40 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:40 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:08:40 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:40.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:08:41 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:41 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:08:41 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:41.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:08:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:41 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff39c003110 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:41 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3a8001670 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:42 compute-1 sudo[234411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 07 10:08:42 compute-1 sudo[234411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:08:42 compute-1 sudo[234411]: pam_unix(sudo:session): session closed for user root
Dec 07 10:08:42 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:42 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3b8009190 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:42 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:42 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:08:42 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:42.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:08:42 compute-1 ceph-mon[80077]: pgmap v794: 337 pgs: 337 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.5 KiB/s wr, 36 op/s
Dec 07 10:08:42 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:08:42 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:08:42 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:08:42 compute-1 sudo[234437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:08:42 compute-1 sudo[234437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:08:42 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:08:42 compute-1 sudo[234437]: pam_unix(sudo:session): session closed for user root
Dec 07 10:08:43 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:43 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:08:43 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:43.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:08:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:43 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff394002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:43 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff39c003110 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:44 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:44 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3a8001670 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:44 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:44 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:08:44 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:44.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:08:44 compute-1 ceph-mon[80077]: pgmap v795: 337 pgs: 337 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 07 10:08:45 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:45 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:08:45 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:45.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:08:45 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:45 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3b8009190 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:46 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff394002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:46 compute-1 ceph-mon[80077]: pgmap v796: 337 pgs: 337 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec 07 10:08:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:46 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff39c003110 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:46 compute-1 podman[234464]: 2025-12-07 10:08:46.606004996 +0000 UTC m=+0.100043557 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 07 10:08:46 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:46 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:08:46 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:46.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:08:47 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:47 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:08:47 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:47.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:08:47 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:47 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3a8001670 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:47 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:08:47 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:47 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3b8009190 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:48 compute-1 ceph-mon[80077]: pgmap v797: 337 pgs: 337 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 10:08:48 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:48 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff394002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:48 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:48 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:08:48 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:48.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:08:49 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:49 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:08:49 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:49.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:08:49 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:49 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff39c0044a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:49 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:49 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3a8002770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:50 compute-1 ceph-mon[80077]: pgmap v798: 337 pgs: 337 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 10:08:50 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:50 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3b8009190 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:50 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:50 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:08:50 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:50.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:08:51 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:51 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:08:51 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:51.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:08:51 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:51 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff394002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:51 compute-1 podman[234492]: 2025-12-07 10:08:51.604091387 +0000 UTC m=+0.097348444 container health_status 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 07 10:08:51 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:51 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff39c0044a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:52 compute-1 ceph-mon[80077]: pgmap v799: 337 pgs: 337 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 07 10:08:52 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:52 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3a8002770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:52 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:52 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:08:52 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:52.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:08:52 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:08:53 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:53 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:08:53 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:53.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:08:53 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:53 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3b8009190 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:53 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:53 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3b8009190 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:54 compute-1 ceph-mon[80077]: pgmap v800: 337 pgs: 337 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 10:08:54 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:54 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff39c0044a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:54 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:54 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:08:54 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:54.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:08:55 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:55 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:08:55 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:55.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:08:55 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:55 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3a8002770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:08:55 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/1236316415' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:08:55 compute-1 kernel: ganesha.nfsd[234409]: segfault at 50 ip 00007ff46286e32e sp 00007ff41affc210 error 4 in libntirpc.so.5.8[7ff462853000+2c000] likely on CPU 5 (core 0, socket 5)
Dec 07 10:08:55 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec 07 10:08:55 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[233661]: 07/12/2025 10:08:55 : epoch 693551bd : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3a8002770 fd 39 proxy ignored for local
Dec 07 10:08:55 compute-1 systemd[1]: Started Process Core Dump (PID 234515/UID 0).
Dec 07 10:08:56 compute-1 ceph-mon[80077]: pgmap v801: 337 pgs: 337 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 07 10:08:56 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:56 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:08:56 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:56.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:08:57 compute-1 systemd-coredump[234516]: Process 233667 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 61:
                                                    #0  0x00007ff46286e32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Dec 07 10:08:57 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:57 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:08:57 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:57.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:08:57 compute-1 systemd[1]: systemd-coredump@14-234515-0.service: Deactivated successfully.
Dec 07 10:08:57 compute-1 systemd[1]: systemd-coredump@14-234515-0.service: Consumed 1.230s CPU time.
Dec 07 10:08:57 compute-1 podman[234523]: 2025-12-07 10:08:57.270163404 +0000 UTC m=+0.029121495 container died 0671de3c02ceb46f10ee8ba36b1872130586741104affd1ac3189424b44908b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 07 10:08:57 compute-1 systemd[1]: var-lib-containers-storage-overlay-6c4fa6211d7ba55e6442446b56225ceaa3a0ee00aa882ba53f63877749fed254-merged.mount: Deactivated successfully.
Dec 07 10:08:57 compute-1 podman[234523]: 2025-12-07 10:08:57.320921007 +0000 UTC m=+0.079879058 container remove 0671de3c02ceb46f10ee8ba36b1872130586741104affd1ac3189424b44908b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 07 10:08:57 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Main process exited, code=exited, status=139/n/a
Dec 07 10:08:57 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:08:57 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Failed with result 'exit-code'.
Dec 07 10:08:57 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Consumed 1.779s CPU time.
Dec 07 10:08:57 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:08:58 compute-1 ceph-mon[80077]: pgmap v802: 337 pgs: 337 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 10:08:58 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:58 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:08:58 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:08:58.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:08:59 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:08:59 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:08:59 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:08:59.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:08:59 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2846118594' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 07 10:08:59 compute-1 podman[234568]: 2025-12-07 10:08:59.605002901 +0000 UTC m=+0.097271771 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 07 10:09:00 compute-1 ceph-mon[80077]: pgmap v803: 337 pgs: 337 active+clean; 79 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.5 MiB/s wr, 26 op/s
Dec 07 10:09:00 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/12811471' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 07 10:09:00 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:00 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:09:00 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:00.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:09:01 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:01 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:09:01 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:01.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:09:01 compute-1 sshd-session[234588]: Invalid user apache from 104.248.193.130 port 34488
Dec 07 10:09:01 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/100901 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 07 10:09:01 compute-1 sshd-session[234588]: Connection closed by invalid user apache 104.248.193.130 port 34488 [preauth]
Dec 07 10:09:02 compute-1 ceph-mon[80077]: pgmap v804: 337 pgs: 337 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 07 10:09:02 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:02 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:09:02 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:02.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:09:02 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:09:02 compute-1 sudo[234591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:09:02 compute-1 sudo[234591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:09:02 compute-1 sudo[234591]: pam_unix(sudo:session): session closed for user root
Dec 07 10:09:03 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:03 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:09:03 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:03.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:09:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/1661943840' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 07 10:09:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/1661943840' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 07 10:09:04 compute-1 ceph-mon[80077]: pgmap v805: 337 pgs: 337 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 07 10:09:04 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:04 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:09:04 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:04.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:09:05 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:05 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:09:05 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:05.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:09:06 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:09:06.392 142603 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '76:bc:71', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ce:90:a9:76:77:00'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 07 10:09:06 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:09:06.394 142603 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 07 10:09:06 compute-1 ceph-mon[80077]: pgmap v806: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec 07 10:09:06 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:06 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:09:06 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:06.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:09:07 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:07 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:09:07 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:07.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:09:07 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Scheduled restart job, restart counter is at 15.
Dec 07 10:09:07 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.jddrlu for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec 07 10:09:07 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Consumed 1.779s CPU time.
Dec 07 10:09:07 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.jddrlu for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c...
Dec 07 10:09:07 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:09:08 compute-1 podman[234668]: 2025-12-07 10:09:08.031104749 +0000 UTC m=+0.049954172 container create a422a4f40ea835b8da4048d5c6e5a3cf908cd1d7f228959b707c041d4c46cfdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 07 10:09:08 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0377e90054d05f14277c57dc4de15e1f9095360168493bbd077db3ab8ecc7b1/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec 07 10:09:08 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0377e90054d05f14277c57dc4de15e1f9095360168493bbd077db3ab8ecc7b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 07 10:09:08 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0377e90054d05f14277c57dc4de15e1f9095360168493bbd077db3ab8ecc7b1/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec 07 10:09:08 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0377e90054d05f14277c57dc4de15e1f9095360168493bbd077db3ab8ecc7b1/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.jddrlu-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec 07 10:09:08 compute-1 podman[234668]: 2025-12-07 10:09:08.088874284 +0000 UTC m=+0.107723697 container init a422a4f40ea835b8da4048d5c6e5a3cf908cd1d7f228959b707c041d4c46cfdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 07 10:09:08 compute-1 podman[234668]: 2025-12-07 10:09:08.004220137 +0000 UTC m=+0.023069610 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 07 10:09:08 compute-1 podman[234668]: 2025-12-07 10:09:08.10528266 +0000 UTC m=+0.124132043 container start a422a4f40ea835b8da4048d5c6e5a3cf908cd1d7f228959b707c041d4c46cfdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 07 10:09:08 compute-1 bash[234668]: a422a4f40ea835b8da4048d5c6e5a3cf908cd1d7f228959b707c041d4c46cfdc
Dec 07 10:09:08 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.jddrlu for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec 07 10:09:08 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:08 : epoch 69355244 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec 07 10:09:08 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:08 : epoch 69355244 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec 07 10:09:08 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:08 : epoch 69355244 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec 07 10:09:08 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:08 : epoch 69355244 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec 07 10:09:08 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:08 : epoch 69355244 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec 07 10:09:08 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:08 : epoch 69355244 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec 07 10:09:08 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:08 : epoch 69355244 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec 07 10:09:08 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:08 : epoch 69355244 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 07 10:09:08 compute-1 ceph-mon[80077]: pgmap v807: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Dec 07 10:09:08 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:08 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:09:08 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:08.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:09:09 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:09 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:09:09 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:09.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:09:10 compute-1 ceph-mon[80077]: pgmap v808: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec 07 10:09:10 compute-1 ceph-mon[80077]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #49. Immutable memtables: 0.
Dec 07 10:09:10 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:09:10.641933) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 07 10:09:10 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 49
Dec 07 10:09:10 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102150642010, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 998, "num_deletes": 251, "total_data_size": 2045194, "memory_usage": 2065632, "flush_reason": "Manual Compaction"}
Dec 07 10:09:10 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #50: started
Dec 07 10:09:10 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:10 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:09:10 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:10.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:09:10 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102150661207, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 50, "file_size": 1348967, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25347, "largest_seqno": 26340, "table_properties": {"data_size": 1344629, "index_size": 1990, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10111, "raw_average_key_size": 19, "raw_value_size": 1335707, "raw_average_value_size": 2608, "num_data_blocks": 88, "num_entries": 512, "num_filter_entries": 512, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765102078, "oldest_key_time": 1765102078, "file_creation_time": 1765102150, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "19b7cd17-892b-4642-8771-311739802c4a", "db_session_id": "JE48258Z8V09XG5T5TD7", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Dec 07 10:09:10 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 19332 microseconds, and 7002 cpu microseconds.
Dec 07 10:09:10 compute-1 ceph-mon[80077]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 07 10:09:10 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:09:10.661277) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #50: 1348967 bytes OK
Dec 07 10:09:10 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:09:10.661302) [db/memtable_list.cc:519] [default] Level-0 commit table #50 started
Dec 07 10:09:10 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:09:10.663567) [db/memtable_list.cc:722] [default] Level-0 commit table #50: memtable #1 done
Dec 07 10:09:10 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:09:10.663588) EVENT_LOG_v1 {"time_micros": 1765102150663581, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 07 10:09:10 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:09:10.663606) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 07 10:09:10 compute-1 ceph-mon[80077]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 2040261, prev total WAL file size 2040261, number of live WAL files 2.
Dec 07 10:09:10 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000046.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 10:09:10 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:09:10.664605) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Dec 07 10:09:10 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 07 10:09:10 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [50(1317KB)], [48(13MB)]
Dec 07 10:09:10 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102150664677, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [50], "files_L6": [48], "score": -1, "input_data_size": 15213755, "oldest_snapshot_seqno": -1}
Dec 07 10:09:10 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #51: 5507 keys, 12994172 bytes, temperature: kUnknown
Dec 07 10:09:10 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102150806877, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 51, "file_size": 12994172, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12957704, "index_size": 21574, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13829, "raw_key_size": 141415, "raw_average_key_size": 25, "raw_value_size": 12858343, "raw_average_value_size": 2334, "num_data_blocks": 876, "num_entries": 5507, "num_filter_entries": 5507, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100473, "oldest_key_time": 0, "file_creation_time": 1765102150, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "19b7cd17-892b-4642-8771-311739802c4a", "db_session_id": "JE48258Z8V09XG5T5TD7", "orig_file_number": 51, "seqno_to_time_mapping": "N/A"}}
Dec 07 10:09:10 compute-1 ceph-mon[80077]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 07 10:09:10 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:09:10.807229) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 12994172 bytes
Dec 07 10:09:10 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:09:10.809008) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 106.9 rd, 91.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 13.2 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(20.9) write-amplify(9.6) OK, records in: 6023, records dropped: 516 output_compression: NoCompression
Dec 07 10:09:10 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:09:10.809036) EVENT_LOG_v1 {"time_micros": 1765102150809023, "job": 28, "event": "compaction_finished", "compaction_time_micros": 142316, "compaction_time_cpu_micros": 28342, "output_level": 6, "num_output_files": 1, "total_output_size": 12994172, "num_input_records": 6023, "num_output_records": 5507, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 07 10:09:10 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 10:09:10 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102150809571, "job": 28, "event": "table_file_deletion", "file_number": 50}
Dec 07 10:09:10 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000048.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 10:09:10 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102150813879, "job": 28, "event": "table_file_deletion", "file_number": 48}
Dec 07 10:09:10 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:09:10.664492) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:09:10 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:09:10.813939) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:09:10 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:09:10.813945) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:09:10 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:09:10.813948) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:09:10 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:09:10.813950) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:09:10 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:09:10.813953) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:09:11 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:11 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:09:11 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:11.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:09:12 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:09:12.396 142603 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e231b22a-cdf9-44dd-ad96-a8e48b3d52da, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 07 10:09:12 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:12 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:09:12 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:12.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:09:12 compute-1 ceph-mon[80077]: pgmap v809: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 313 KiB/s wr, 75 op/s
Dec 07 10:09:12 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:09:12 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:09:13 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:13 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:09:13 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:13.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:09:14 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:14 : epoch 69355244 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 07 10:09:14 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:14 : epoch 69355244 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 07 10:09:14 compute-1 nova_compute[230488]: 2025-12-07 10:09:14.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:09:14 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:14 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:09:14 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:14.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:09:14 compute-1 ceph-mon[80077]: pgmap v810: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Dec 07 10:09:15 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:15 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:09:15 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:15.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:09:15 compute-1 ceph-mon[80077]: pgmap v811: 337 pgs: 337 active+clean; 109 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 122 op/s
Dec 07 10:09:16 compute-1 nova_compute[230488]: 2025-12-07 10:09:16.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:09:16 compute-1 nova_compute[230488]: 2025-12-07 10:09:16.270 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 07 10:09:16 compute-1 nova_compute[230488]: 2025-12-07 10:09:16.270 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:09:16 compute-1 nova_compute[230488]: 2025-12-07 10:09:16.309 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:09:16 compute-1 nova_compute[230488]: 2025-12-07 10:09:16.310 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:09:16 compute-1 nova_compute[230488]: 2025-12-07 10:09:16.310 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:09:16 compute-1 nova_compute[230488]: 2025-12-07 10:09:16.311 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 07 10:09:16 compute-1 nova_compute[230488]: 2025-12-07 10:09:16.311 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 07 10:09:16 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:16 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:09:16 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:16.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:09:16 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/3691255036' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:09:16 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/3932531656' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:09:16 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 07 10:09:16 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2751958954' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:09:16 compute-1 nova_compute[230488]: 2025-12-07 10:09:16.811 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 07 10:09:16 compute-1 nova_compute[230488]: 2025-12-07 10:09:16.952 230492 WARNING nova.virt.libvirt.driver [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 07 10:09:16 compute-1 nova_compute[230488]: 2025-12-07 10:09:16.953 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5269MB free_disk=59.943607330322266GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 07 10:09:16 compute-1 nova_compute[230488]: 2025-12-07 10:09:16.953 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:09:16 compute-1 nova_compute[230488]: 2025-12-07 10:09:16.954 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:09:17 compute-1 nova_compute[230488]: 2025-12-07 10:09:17.029 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 07 10:09:17 compute-1 nova_compute[230488]: 2025-12-07 10:09:17.030 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 07 10:09:17 compute-1 nova_compute[230488]: 2025-12-07 10:09:17.052 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 07 10:09:17 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:17 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:09:17 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:17.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:09:17 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 07 10:09:17 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2523068657' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:09:17 compute-1 nova_compute[230488]: 2025-12-07 10:09:17.511 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 07 10:09:17 compute-1 nova_compute[230488]: 2025-12-07 10:09:17.519 230492 DEBUG nova.compute.provider_tree [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Inventory has not changed in ProviderTree for provider: 58b51610-0751-43d9-94a3-66540bffec81 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 07 10:09:17 compute-1 nova_compute[230488]: 2025-12-07 10:09:17.534 230492 DEBUG nova.scheduler.client.report [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Inventory has not changed for provider 58b51610-0751-43d9-94a3-66540bffec81 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 07 10:09:17 compute-1 nova_compute[230488]: 2025-12-07 10:09:17.537 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 07 10:09:17 compute-1 nova_compute[230488]: 2025-12-07 10:09:17.537 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:09:17 compute-1 podman[234773]: 2025-12-07 10:09:17.588203589 +0000 UTC m=+0.093891729 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec 07 10:09:17 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/2751958954' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:09:17 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/281320609' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:09:17 compute-1 ceph-mon[80077]: pgmap v812: 337 pgs: 337 active+clean; 109 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 246 KiB/s rd, 2.1 MiB/s wr, 48 op/s
Dec 07 10:09:17 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/487612971' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:09:17 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/2523068657' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:09:17 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:09:18 compute-1 nova_compute[230488]: 2025-12-07 10:09:18.537 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:09:18 compute-1 nova_compute[230488]: 2025-12-07 10:09:18.538 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 07 10:09:18 compute-1 nova_compute[230488]: 2025-12-07 10:09:18.539 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 07 10:09:18 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:18 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:09:18 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:18.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:09:18 compute-1 nova_compute[230488]: 2025-12-07 10:09:18.817 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 07 10:09:18 compute-1 nova_compute[230488]: 2025-12-07 10:09:18.818 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:09:18 compute-1 nova_compute[230488]: 2025-12-07 10:09:18.819 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:09:18 compute-1 nova_compute[230488]: 2025-12-07 10:09:18.820 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:09:18 compute-1 nova_compute[230488]: 2025-12-07 10:09:18.820 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:09:19 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:19 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:09:19 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:19.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:09:20 compute-1 ceph-mon[80077]: pgmap v813: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Dec 07 10:09:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:20 : epoch 69355244 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 07 10:09:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:20 : epoch 69355244 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec 07 10:09:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:20 : epoch 69355244 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec 07 10:09:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:20 : epoch 69355244 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec 07 10:09:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:20 : epoch 69355244 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec 07 10:09:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:20 : epoch 69355244 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec 07 10:09:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:20 : epoch 69355244 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec 07 10:09:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:20 : epoch 69355244 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 10:09:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:20 : epoch 69355244 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 10:09:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:20 : epoch 69355244 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 10:09:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:20 : epoch 69355244 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec 07 10:09:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:20 : epoch 69355244 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 07 10:09:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:20 : epoch 69355244 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec 07 10:09:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:20 : epoch 69355244 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec 07 10:09:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:20 : epoch 69355244 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec 07 10:09:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:20 : epoch 69355244 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec 07 10:09:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:20 : epoch 69355244 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec 07 10:09:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:20 : epoch 69355244 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec 07 10:09:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:20 : epoch 69355244 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec 07 10:09:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:20 : epoch 69355244 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec 07 10:09:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:20 : epoch 69355244 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec 07 10:09:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:20 : epoch 69355244 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec 07 10:09:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:20 : epoch 69355244 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec 07 10:09:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:20 : epoch 69355244 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec 07 10:09:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:20 : epoch 69355244 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 07 10:09:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:20 : epoch 69355244 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec 07 10:09:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:20 : epoch 69355244 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 07 10:09:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:20 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa9bc000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:09:20 compute-1 nova_compute[230488]: 2025-12-07 10:09:20.547 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:09:20 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:20 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:09:20 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:20.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:09:21 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:21 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:09:21 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:21.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:09:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:21 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa9b4001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:09:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:21 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa9b4001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:09:22 compute-1 ceph-mon[80077]: pgmap v814: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Dec 07 10:09:22 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:22 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa9a8000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:09:22 compute-1 podman[234819]: 2025-12-07 10:09:22.583479534 +0000 UTC m=+0.080177396 container health_status 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 07 10:09:22 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:22 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:09:22 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:22.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:09:22 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:09:23 compute-1 sudo[234840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:09:23 compute-1 sudo[234840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:09:23 compute-1 sudo[234840]: pam_unix(sudo:session): session closed for user root
Dec 07 10:09:23 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:23 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:09:23 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:23.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:09:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:23 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa994000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:09:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/100923 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 07 10:09:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:23 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa990000d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:09:24 compute-1 ceph-mon[80077]: pgmap v815: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Dec 07 10:09:24 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:24 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa9b40027d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:09:24 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:24 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:09:24 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:24.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:09:25 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:25 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:09:25 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:25.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:09:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:25 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa9a8001910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:09:25 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:25 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa9a8001910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:09:26 compute-1 ceph-mon[80077]: pgmap v816: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Dec 07 10:09:26 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:26 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa990001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:09:26 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:26 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:09:26 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:26.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:09:27 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:27 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:09:27 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:27.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:09:27 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:27 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa9b40027d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:09:27 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:09:27 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:27 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa994001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:09:28 compute-1 ceph-mon[80077]: pgmap v817: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 82 KiB/s wr, 20 op/s
Dec 07 10:09:28 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:09:28 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:28 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa9a8001910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:09:28 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:28 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:09:28 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:28.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:09:29 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:29 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:09:29 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:29.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:09:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:29 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa990001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:09:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:29 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa9b40027d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:09:30 compute-1 ceph-mon[80077]: pgmap v818: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 85 KiB/s wr, 20 op/s
Dec 07 10:09:30 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:30 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa9940023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:09:30 compute-1 podman[234869]: 2025-12-07 10:09:30.586974436 +0000 UTC m=+0.083766293 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 07 10:09:30 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:30 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:09:30 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:30.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:09:31 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:31 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:09:31 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:31.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:09:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:31 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa9a8001910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:09:31 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:31 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa990001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:09:32 compute-1 ceph-mon[80077]: pgmap v819: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 15 KiB/s wr, 1 op/s
Dec 07 10:09:32 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:32 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa9b40027d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:09:32 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:32 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:09:32 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:32.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:09:32 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:09:33 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:33 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:09:33 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:33.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:09:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:33 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa9940023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:09:33 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:33 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa9a8003190 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:09:34 compute-1 ceph-mon[80077]: pgmap v820: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 15 KiB/s wr, 1 op/s
Dec 07 10:09:34 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:34 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa990002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:09:34 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:34 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:09:34 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:34.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:09:35 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:35 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:09:35 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:35.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:09:35 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:35 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa9b40027d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:09:35 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:35 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa9940023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:09:36 compute-1 ceph-mon[80077]: pgmap v821: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 16 KiB/s wr, 2 op/s
Dec 07 10:09:36 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:36 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa9a8003190 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:09:36 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:36 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:09:36 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:36.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:09:37 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/100937 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 07 10:09:37 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:37 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:09:37 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:37.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:09:37 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:37 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa990002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:09:37 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:09:37 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:37 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa9b40027d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:09:38 compute-1 ceph-mon[80077]: pgmap v822: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 7.9 KiB/s rd, 3.3 KiB/s wr, 1 op/s
Dec 07 10:09:38 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:38 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa994003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:09:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:09:38.643 142603 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:09:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:09:38.644 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:09:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:09:38.644 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:09:38 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:38 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:09:38 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:38.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:09:39 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:39 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:09:39 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:39.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:09:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:39 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa9a8003ea0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:09:39 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:39 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa990002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:09:40 compute-1 ceph-mon[80077]: pgmap v823: 337 pgs: 337 active+clean; 58 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 4.5 KiB/s wr, 28 op/s
Dec 07 10:09:40 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/3510846299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:09:40 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:40 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa9b40027d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:09:40 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:40 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:09:40 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:40.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:09:41 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:41 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:09:41 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:41.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:09:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:41 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa994003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:09:41 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:41 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa9a8003ea0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:09:42 compute-1 sudo[234894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 10:09:42 compute-1 sudo[234894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:09:42 compute-1 sudo[234894]: pam_unix(sudo:session): session closed for user root
Dec 07 10:09:42 compute-1 sudo[234919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 07 10:09:42 compute-1 sudo[234919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:09:42 compute-1 ceph-mon[80077]: pgmap v824: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.2 KiB/s wr, 29 op/s
Dec 07 10:09:42 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:09:42 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:42 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa990003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:09:42 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:42 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:09:42 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:42.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:09:42 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:09:42 compute-1 sudo[234919]: pam_unix(sudo:session): session closed for user root
Dec 07 10:09:43 compute-1 sudo[234978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:09:43 compute-1 sudo[234978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:09:43 compute-1 sudo[234978]: pam_unix(sudo:session): session closed for user root
Dec 07 10:09:43 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:43 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:09:43 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:43.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:09:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:43 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa9b40027d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:09:43 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:09:43 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 07 10:09:43 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:09:43 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:09:43 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 07 10:09:43 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 07 10:09:43 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:09:43 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:43 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa994003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:09:43 compute-1 sshd-session[235003]: Invalid user postgres from 104.248.193.130 port 58424
Dec 07 10:09:43 compute-1 sshd-session[235003]: Connection closed by invalid user postgres 104.248.193.130 port 58424 [preauth]
Dec 07 10:09:44 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:44 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa9a8003ea0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:09:44 compute-1 ceph-mon[80077]: pgmap v825: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Dec 07 10:09:44 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:44 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:09:44 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:44.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:09:45 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:45 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:09:45 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:45.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:09:45 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:45 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa990003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:09:45 compute-1 ceph-mon[80077]: pgmap v826: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Dec 07 10:09:45 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:45 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa9b40027d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:09:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:46 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa994003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:09:46 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:46 : epoch 69355244 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 07 10:09:46 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:46 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:09:46 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:46.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:09:47 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:47 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:09:47 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:47.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:09:47 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:47 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa9a8003ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:09:47 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:09:47 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:47 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa990003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:09:48 compute-1 ceph-mon[80077]: pgmap v827: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec 07 10:09:48 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:48 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa9b40027d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:09:48 compute-1 podman[235008]: 2025-12-07 10:09:48.668727263 +0000 UTC m=+0.172505790 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 07 10:09:48 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:48 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:09:48 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:48.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:09:49 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:49 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:09:49 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:49.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:09:49 compute-1 sudo[235035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 07 10:09:49 compute-1 sudo[235035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:09:49 compute-1 sudo[235035]: pam_unix(sudo:session): session closed for user root
Dec 07 10:09:49 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:49 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa994003730 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:09:49 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:49 : epoch 69355244 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 07 10:09:49 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:49 : epoch 69355244 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 07 10:09:49 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:49 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa9a8003ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:09:50 compute-1 ceph-mon[80077]: pgmap v828: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.4 KiB/s wr, 28 op/s
Dec 07 10:09:50 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:09:50 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:09:50 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:50 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa990003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:09:50 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:50 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:09:50 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:50.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:09:51 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:51 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:09:51 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:51.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:09:51 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:51 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa9b40027d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:09:51 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:51 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa994003730 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:09:52 compute-1 ceph-mon[80077]: pgmap v829: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 07 10:09:52 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:52 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa9a8003ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:09:52 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:52 : epoch 69355244 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 07 10:09:52 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:52 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:09:52 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:52.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:09:52 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:09:53 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:53 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:09:53 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:53.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:09:53 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:53 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa988000b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:09:53 compute-1 podman[235065]: 2025-12-07 10:09:53.595191172 +0000 UTC m=+0.086013154 container health_status 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec 07 10:09:53 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:53 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa9b40027d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:09:54 compute-1 ceph-mon[80077]: pgmap v830: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 07 10:09:54 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:54 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa99c001120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:09:54 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:54 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:09:54 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:54.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:09:55 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:55 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:09:55 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:55.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:09:55 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:55 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa9a8003ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:09:55 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:55 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa9880016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:09:56 compute-1 ceph-mon[80077]: pgmap v831: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 07 10:09:56 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:56 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa9b40027d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:09:56 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:56 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:09:56 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:56.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:09:57 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:57 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:09:57 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:57.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:09:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:57 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa99c001c20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:09:57 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:09:57 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:57 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa9a8003ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:09:58 compute-1 ceph-mon[80077]: pgmap v832: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 07 10:09:58 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:09:58 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:58 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa9880016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:09:58 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:58 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:09:58 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:09:58.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:09:59 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/100959 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 07 10:09:59 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:09:59 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:09:59 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:09:59.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:09:59 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:59 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa9b40027d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:09:59 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:09:59 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa99c001c20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:10:00 compute-1 ceph-mon[80077]: pgmap v833: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 3 op/s
Dec 07 10:10:00 compute-1 ceph-mon[80077]: overall HEALTH_OK
Dec 07 10:10:00 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:10:00 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa9a8003ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:10:00 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:00 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:10:00 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:00.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:10:01 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:01 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:10:01 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:01.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:10:01 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:10:01 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa9880016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:10:01 compute-1 podman[235089]: 2025-12-07 10:10:01.567912659 +0000 UTC m=+0.071007167 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 07 10:10:01 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:10:01 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa9b40027d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:10:02 compute-1 ceph-mon[80077]: pgmap v834: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 767 B/s wr, 2 op/s
Dec 07 10:10:02 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/58678575' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:10:02 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:10:02 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa99c002930 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:10:02 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:02 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:10:02 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:02.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:10:02 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:10:03 compute-1 sudo[235109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:10:03 compute-1 sudo[235109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:10:03 compute-1 sudo[235109]: pam_unix(sudo:session): session closed for user root
Dec 07 10:10:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/2061580697' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 07 10:10:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/2061580697' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 07 10:10:03 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:03 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:10:03 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:03.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:10:03 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:10:03 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa9a8003ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:10:03 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:10:03 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa988002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:10:04 compute-1 ceph-mon[80077]: pgmap v835: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Dec 07 10:10:04 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:10:04 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa9b40027d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:10:04 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:04 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:10:04 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:04.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:10:05 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:05 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:10:05 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:05.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:10:05 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:10:05 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa99c002930 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:10:05 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:10:05 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa9a8003ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:10:06 compute-1 ceph-mon[80077]: pgmap v836: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 07 10:10:06 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1637866266' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 07 10:10:06 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1478348342' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 07 10:10:06 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:10:06 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa988002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:10:06 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:06 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:10:06 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:06.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:10:07 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:07 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:10:07 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:07.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:10:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:10:07 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa9b40027d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:10:07 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:10:07 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:10:07 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa99c003640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:10:08 compute-1 ceph-mon[80077]: pgmap v837: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 07 10:10:08 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:10:08 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa9a8003ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:10:08 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:08 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:10:08 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:08.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:10:08 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:10:08.736 142603 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '76:bc:71', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ce:90:a9:76:77:00'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 07 10:10:08 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:10:08.737 142603 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 07 10:10:09 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:09 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:10:09 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:09.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:10:09 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:10:09 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa988002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:10:09 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:10:09 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa9b40027d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:10:10 compute-1 ceph-mon[80077]: pgmap v838: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Dec 07 10:10:10 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:10:10 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa99c003640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:10:10 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:10 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:10:10 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:10.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:10:11 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:11 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:10:11 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:11.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:10:11 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:10:11 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa9a8003ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:10:11 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:10:11 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa988003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:10:12 compute-1 ceph-mon[80077]: pgmap v839: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 388 KiB/s rd, 1.8 MiB/s wr, 50 op/s
Dec 07 10:10:12 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:10:12 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa99c003640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:10:12 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:12 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:10:12 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:12.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:10:12 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:10:13 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:13 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:10:13 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:13.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:10:13 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:10:13 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:10:13 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa9b40027d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:10:13 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:10:13 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa9a8003ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:10:14 compute-1 nova_compute[230488]: 2025-12-07 10:10:14.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:10:14 compute-1 ceph-mon[80077]: pgmap v840: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 388 KiB/s rd, 1.8 MiB/s wr, 50 op/s
Dec 07 10:10:14 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:10:14 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa988003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:10:14 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:14 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:10:14 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:14.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:10:15 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:15 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:10:15 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:15.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:10:15 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:10:15 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa99c003640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:10:15 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:10:15 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa9a8003ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:10:16 compute-1 nova_compute[230488]: 2025-12-07 10:10:16.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:10:16 compute-1 nova_compute[230488]: 2025-12-07 10:10:16.270 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 07 10:10:16 compute-1 nova_compute[230488]: 2025-12-07 10:10:16.270 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:10:16 compute-1 nova_compute[230488]: 2025-12-07 10:10:16.296 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:10:16 compute-1 nova_compute[230488]: 2025-12-07 10:10:16.296 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:10:16 compute-1 nova_compute[230488]: 2025-12-07 10:10:16.297 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:10:16 compute-1 nova_compute[230488]: 2025-12-07 10:10:16.297 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 07 10:10:16 compute-1 nova_compute[230488]: 2025-12-07 10:10:16.298 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 07 10:10:16 compute-1 ceph-mon[80077]: pgmap v841: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec 07 10:10:16 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:10:16 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa9b40027d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:10:16 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:16 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:10:16 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:16.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:10:16 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 07 10:10:16 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2801595769' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:10:16 compute-1 nova_compute[230488]: 2025-12-07 10:10:16.766 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 07 10:10:16 compute-1 nova_compute[230488]: 2025-12-07 10:10:16.977 230492 WARNING nova.virt.libvirt.driver [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 07 10:10:16 compute-1 nova_compute[230488]: 2025-12-07 10:10:16.978 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5223MB free_disk=59.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 07 10:10:16 compute-1 nova_compute[230488]: 2025-12-07 10:10:16.978 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:10:16 compute-1 nova_compute[230488]: 2025-12-07 10:10:16.979 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:10:17 compute-1 nova_compute[230488]: 2025-12-07 10:10:17.067 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 07 10:10:17 compute-1 nova_compute[230488]: 2025-12-07 10:10:17.067 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 07 10:10:17 compute-1 nova_compute[230488]: 2025-12-07 10:10:17.090 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 07 10:10:17 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:17 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:10:17 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:17.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:10:17 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:10:17 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa988003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:10:17 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/2801595769' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:10:17 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/1613278563' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:10:17 compute-1 ceph-mon[80077]: pgmap v842: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 07 10:10:17 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 07 10:10:17 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2154677551' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:10:17 compute-1 nova_compute[230488]: 2025-12-07 10:10:17.494 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 07 10:10:17 compute-1 nova_compute[230488]: 2025-12-07 10:10:17.501 230492 DEBUG nova.compute.provider_tree [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Inventory has not changed in ProviderTree for provider: 58b51610-0751-43d9-94a3-66540bffec81 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 07 10:10:17 compute-1 nova_compute[230488]: 2025-12-07 10:10:17.516 230492 DEBUG nova.scheduler.client.report [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Inventory has not changed for provider 58b51610-0751-43d9-94a3-66540bffec81 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 07 10:10:17 compute-1 nova_compute[230488]: 2025-12-07 10:10:17.519 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 07 10:10:17 compute-1 nova_compute[230488]: 2025-12-07 10:10:17.519 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.541s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:10:17 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:10:17.739 142603 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e231b22a-cdf9-44dd-ad96-a8e48b3d52da, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 07 10:10:17 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:10:17 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:10:17 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa99c003640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:10:18 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/2154677551' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:10:18 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/1852959188' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:10:18 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/2228523150' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:10:18 compute-1 nova_compute[230488]: 2025-12-07 10:10:18.520 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:10:18 compute-1 nova_compute[230488]: 2025-12-07 10:10:18.520 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:10:18 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:10:18 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa9a8003ec0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:10:18 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:18 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:10:18 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:18.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:10:19 compute-1 nova_compute[230488]: 2025-12-07 10:10:19.270 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:10:19 compute-1 nova_compute[230488]: 2025-12-07 10:10:19.270 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 07 10:10:19 compute-1 nova_compute[230488]: 2025-12-07 10:10:19.271 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 07 10:10:19 compute-1 nova_compute[230488]: 2025-12-07 10:10:19.301 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 07 10:10:19 compute-1 nova_compute[230488]: 2025-12-07 10:10:19.301 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:10:19 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:19 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:10:19 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:19.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:10:19 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:10:19 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa9b40027d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:10:19 compute-1 ceph-mon[80077]: pgmap v843: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 07 10:10:19 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/148855949' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:10:19 compute-1 podman[235186]: 2025-12-07 10:10:19.673056399 +0000 UTC m=+0.163514577 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 07 10:10:19 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:10:19 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa988003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:10:20 compute-1 nova_compute[230488]: 2025-12-07 10:10:20.270 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:10:20 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:10:20 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa988003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:10:20 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:20 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:10:20 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:20.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:10:21 compute-1 nova_compute[230488]: 2025-12-07 10:10:21.264 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:10:21 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:21 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:10:21 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:21.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:10:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:10:21 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa99c003640 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:10:21 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:10:21 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa9a8003ee0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:10:22 compute-1 ceph-mon[80077]: pgmap v844: 337 pgs: 337 active+clean; 92 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 349 KiB/s wr, 69 op/s
Dec 07 10:10:22 compute-1 nova_compute[230488]: 2025-12-07 10:10:22.288 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:10:22 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:10:22 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa9b40040b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 07 10:10:22 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:22 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:10:22 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:22.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:10:22 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:10:23 compute-1 sudo[235215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:10:23 compute-1 sudo[235215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:10:23 compute-1 sudo[235215]: pam_unix(sudo:session): session closed for user root
Dec 07 10:10:23 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:23 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:10:23 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:23.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:10:23 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu[234684]: 07/12/2025 10:10:23 : epoch 69355244 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa988003c10 fd 48 proxy ignored for local
Dec 07 10:10:23 compute-1 kernel: ganesha.nfsd[235063]: segfault at 50 ip 00007faa67c0b32e sp 00007faa317f9210 error 4 in libntirpc.so.5.8[7faa67bf0000+2c000] likely on CPU 1 (core 0, socket 1)
Dec 07 10:10:23 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec 07 10:10:23 compute-1 systemd[1]: Started Process Core Dump (PID 235240/UID 0).
Dec 07 10:10:24 compute-1 ceph-mon[80077]: pgmap v845: 337 pgs: 337 active+clean; 92 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 337 KiB/s wr, 56 op/s
Dec 07 10:10:24 compute-1 systemd-coredump[235241]: Process 234688 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 57:
                                                    #0  0x00007faa67c0b32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Dec 07 10:10:24 compute-1 podman[235243]: 2025-12-07 10:10:24.575497615 +0000 UTC m=+0.079631651 container health_status 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Dec 07 10:10:24 compute-1 systemd[1]: systemd-coredump@15-235240-0.service: Deactivated successfully.
Dec 07 10:10:24 compute-1 systemd[1]: systemd-coredump@15-235240-0.service: Consumed 1.113s CPU time.
Dec 07 10:10:24 compute-1 podman[235267]: 2025-12-07 10:10:24.697749465 +0000 UTC m=+0.031864439 container died a422a4f40ea835b8da4048d5c6e5a3cf908cd1d7f228959b707c041d4c46cfdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 07 10:10:24 compute-1 systemd[1]: var-lib-containers-storage-overlay-d0377e90054d05f14277c57dc4de15e1f9095360168493bbd077db3ab8ecc7b1-merged.mount: Deactivated successfully.
Dec 07 10:10:24 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:24 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:10:24 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:24.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:10:24 compute-1 podman[235267]: 2025-12-07 10:10:24.754932043 +0000 UTC m=+0.089046947 container remove a422a4f40ea835b8da4048d5c6e5a3cf908cd1d7f228959b707c041d4c46cfdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-nfs-cephfs-0-0-compute-1-jddrlu, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 07 10:10:24 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Main process exited, code=exited, status=139/n/a
Dec 07 10:10:25 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Failed with result 'exit-code'.
Dec 07 10:10:25 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Consumed 1.649s CPU time.
Dec 07 10:10:25 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:25 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:10:25 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:25.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:10:26 compute-1 ceph-mon[80077]: pgmap v846: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 115 op/s
Dec 07 10:10:26 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:26 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:10:26 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:26.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:10:27 compute-1 sshd-session[235309]: Invalid user postgres from 104.248.193.130 port 38882
Dec 07 10:10:27 compute-1 sshd-session[235309]: Connection closed by invalid user postgres 104.248.193.130 port 38882 [preauth]
Dec 07 10:10:27 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:27 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:10:27 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:27.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:10:27 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:10:28 compute-1 ceph-mon[80077]: pgmap v847: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 07 10:10:28 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:10:28 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:28 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:10:28 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:28.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:10:29 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:29 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:10:29 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:29.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:10:29 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/101029 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 07 10:10:30 compute-1 ceph-mon[80077]: pgmap v848: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 333 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 07 10:10:30 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:30 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:10:30 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:30.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:10:31 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:31 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:10:31 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:31.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:10:32 compute-1 ceph-mon[80077]: pgmap v849: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 333 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 07 10:10:32 compute-1 podman[235314]: 2025-12-07 10:10:32.583338258 +0000 UTC m=+0.077456511 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 07 10:10:32 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:32 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:10:32 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:32.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:10:32 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:10:33 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:33 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:10:33 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:33.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:10:33 compute-1 ceph-mon[80077]: pgmap v850: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 317 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Dec 07 10:10:34 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:34 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:10:34 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:34.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:10:35 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Scheduled restart job, restart counter is at 16.
Dec 07 10:10:35 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.jddrlu for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec 07 10:10:35 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Consumed 1.649s CPU time.
Dec 07 10:10:35 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Start request repeated too quickly.
Dec 07 10:10:35 compute-1 systemd[1]: ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c@nfs.cephfs.0.0.compute-1.jddrlu.service: Failed with result 'exit-code'.
Dec 07 10:10:35 compute-1 systemd[1]: Failed to start Ceph nfs.cephfs.0.0.compute-1.jddrlu for 75f4c9fd-539a-5e17-b55a-0a12a4e2736c.
Dec 07 10:10:35 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:35 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:10:35 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:35.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:10:36 compute-1 ceph-mon[80077]: pgmap v851: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 317 KiB/s rd, 1.8 MiB/s wr, 60 op/s
Dec 07 10:10:36 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:36 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:10:36 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:36.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:10:37 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:37 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:10:37 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:37.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:10:37 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:10:38 compute-1 ceph-mon[80077]: pgmap v852: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 16 KiB/s wr, 1 op/s
Dec 07 10:10:38 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/4024049272' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:10:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:10:38.645 142603 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:10:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:10:38.645 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:10:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:10:38.646 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:10:38 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:38 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:10:38 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:38.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:10:39 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:39 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:10:39 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:39.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:10:40 compute-1 ceph-mon[80077]: pgmap v853: 337 pgs: 337 active+clean; 158 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.6 MiB/s wr, 27 op/s
Dec 07 10:10:40 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:40 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:10:40 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:40.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:10:41 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:41 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:10:41 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:41.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:10:42 compute-1 ceph-mon[80077]: pgmap v854: 337 pgs: 337 active+clean; 167 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 07 10:10:42 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2516761690' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 07 10:10:42 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:42 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:10:42 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:42.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:10:42 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:10:43 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:10:43 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2821274269' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 07 10:10:43 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:43 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:10:43 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:43.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:10:43 compute-1 sudo[235338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:10:43 compute-1 sudo[235338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:10:43 compute-1 sudo[235338]: pam_unix(sudo:session): session closed for user root
Dec 07 10:10:44 compute-1 ceph-osd[77581]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 07 10:10:44 compute-1 ceph-osd[77581]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 11K writes, 42K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 11K writes, 2845 syncs, 3.89 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1953 writes, 6016 keys, 1953 commit groups, 1.0 writes per commit group, ingest: 5.65 MB, 0.01 MB/s
                                           Interval WAL: 1953 writes, 805 syncs, 2.43 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 07 10:10:44 compute-1 ceph-mon[80077]: pgmap v855: 337 pgs: 337 active+clean; 167 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 07 10:10:44 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:44 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:10:44 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:44.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:10:45 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:45 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:10:45 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:45.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:10:46 compute-1 ceph-mon[80077]: pgmap v856: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 07 10:10:46 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:46 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:10:46 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:46.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:10:47 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:47 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:10:47 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:47.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:10:47 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:10:48 compute-1 ceph-mon[80077]: pgmap v857: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 07 10:10:48 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:48 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:10:48 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:48.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:10:49 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:49 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:10:49 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:49.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:10:49 compute-1 sudo[235366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 10:10:49 compute-1 sudo[235366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:10:49 compute-1 sudo[235366]: pam_unix(sudo:session): session closed for user root
Dec 07 10:10:49 compute-1 sudo[235391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Dec 07 10:10:49 compute-1 sudo[235391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:10:49 compute-1 podman[235415]: 2025-12-07 10:10:49.877869632 +0000 UTC m=+0.123042600 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec 07 10:10:50 compute-1 sudo[235391]: pam_unix(sudo:session): session closed for user root
Dec 07 10:10:50 compute-1 sudo[235463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 10:10:50 compute-1 sudo[235463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:10:50 compute-1 sudo[235463]: pam_unix(sudo:session): session closed for user root
Dec 07 10:10:50 compute-1 sudo[235489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 07 10:10:50 compute-1 sudo[235489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:10:50 compute-1 ceph-mon[80077]: pgmap v858: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 90 op/s
Dec 07 10:10:50 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:10:50 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:10:50 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:10:50 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:10:50 compute-1 sudo[235489]: pam_unix(sudo:session): session closed for user root
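The sudo entries above show the ceph orchestrator driving the copied cephadm binary as root for read-only probes (check-host, gather-facts). A minimal sketch of re-running the same gather-facts probe locally, assuming its output is JSON host facts; the binary path and timeout are copied from the log, the parsing step is illustrative only.

# Hedged sketch: re-run the read-only cephadm probe seen above and parse its output.
# The binary path and fsid come from the log; the assumption that gather-facts
# prints JSON is ours, so treat the json.loads step as illustrative.
import json
import subprocess

CEPHADM = "/var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36"

def gather_facts(timeout=895):
    # Mirrors the logged command: sudo /bin/python3 <cephadm> --timeout 895 gather-facts
    out = subprocess.run(
        ["sudo", "/bin/python3", CEPHADM, "--timeout", str(timeout), "gather-facts"],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out)  # host facts (hostname, kernel, memory, NICs, ...)

if __name__ == "__main__":
    facts = gather_facts()
    print(sorted(facts)[:10])  # peek at the first few fact keys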
Dec 07 10:10:50 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:50 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:10:50 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:50.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
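The anonymous "HEAD / HTTP/1.0" 200 entries repeating every couple of seconds from 192.168.122.100 and 192.168.122.102 are external liveness probes against the radosgw beast frontend. A minimal sketch of the same probe, assuming a reachable RGW address and port (neither appears in the access log, only the client IPs).

# Hedged sketch: reproduce the anonymous HEAD / probe seen in the beast access log.
# Host and port are assumptions; adjust both for the actual RGW endpoint.
import http.client

def rgw_alive(host="192.168.122.101", port=8080, timeout=2.0):
    conn = http.client.HTTPConnection(host, port, timeout=timeout)
    try:
        conn.request("HEAD", "/")          # same anonymous request the health checker sends
        return conn.getresponse().status == 200
    except OSError:
        return False
    finally:
        conn.close()

if __name__ == "__main__":
    print("radosgw healthy:", rgw_alive())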
Dec 07 10:10:50 compute-1 sudo[235546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 10:10:50 compute-1 sudo[235546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:10:50 compute-1 sudo[235546]: pam_unix(sudo:session): session closed for user root
Dec 07 10:10:50 compute-1 sudo[235571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 75f4c9fd-539a-5e17-b55a-0a12a4e2736c -- inventory --format=json-pretty --filter-for-batch
Dec 07 10:10:50 compute-1 sudo[235571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:10:51 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:51 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:10:51 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:51.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:10:51 compute-1 podman[235639]: 2025-12-07 10:10:51.434341973 +0000 UTC m=+0.060329612 container create f18c3b1d6668f0ecd10b40a389e2970c912385098f5fd7dc6c71c290fb737c01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_knuth, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec 07 10:10:51 compute-1 systemd[1]: Started libpod-conmon-f18c3b1d6668f0ecd10b40a389e2970c912385098f5fd7dc6c71c290fb737c01.scope.
Dec 07 10:10:51 compute-1 podman[235639]: 2025-12-07 10:10:51.415303416 +0000 UTC m=+0.041291045 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 07 10:10:51 compute-1 systemd[1]: Started libcrun container.
Dec 07 10:10:51 compute-1 podman[235639]: 2025-12-07 10:10:51.546520836 +0000 UTC m=+0.172508525 container init f18c3b1d6668f0ecd10b40a389e2970c912385098f5fd7dc6c71c290fb737c01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_knuth, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 07 10:10:51 compute-1 podman[235639]: 2025-12-07 10:10:51.562743717 +0000 UTC m=+0.188731356 container start f18c3b1d6668f0ecd10b40a389e2970c912385098f5fd7dc6c71c290fb737c01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 07 10:10:51 compute-1 podman[235639]: 2025-12-07 10:10:51.567347423 +0000 UTC m=+0.193335072 container attach f18c3b1d6668f0ecd10b40a389e2970c912385098f5fd7dc6c71c290fb737c01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_knuth, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec 07 10:10:51 compute-1 eager_knuth[235655]: 167 167
Dec 07 10:10:51 compute-1 systemd[1]: libpod-f18c3b1d6668f0ecd10b40a389e2970c912385098f5fd7dc6c71c290fb737c01.scope: Deactivated successfully.
Dec 07 10:10:51 compute-1 conmon[235655]: conmon f18c3b1d6668f0ecd10b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f18c3b1d6668f0ecd10b40a389e2970c912385098f5fd7dc6c71c290fb737c01.scope/container/memory.events
Dec 07 10:10:51 compute-1 podman[235661]: 2025-12-07 10:10:51.623369117 +0000 UTC m=+0.032588858 container died f18c3b1d6668f0ecd10b40a389e2970c912385098f5fd7dc6c71c290fb737c01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_knuth, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 07 10:10:51 compute-1 systemd[1]: var-lib-containers-storage-overlay-d17705df390af70670c85af3d971c070faa57e80316e4a3094732113171eca64-merged.mount: Deactivated successfully.
Dec 07 10:10:51 compute-1 podman[235661]: 2025-12-07 10:10:51.666831279 +0000 UTC m=+0.076051020 container remove f18c3b1d6668f0ecd10b40a389e2970c912385098f5fd7dc6c71c290fb737c01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 07 10:10:51 compute-1 systemd[1]: libpod-conmon-f18c3b1d6668f0ecd10b40a389e2970c912385098f5fd7dc6c71c290fb737c01.scope: Deactivated successfully.
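The eager_knuth container above is a short-lived helper cephadm launches from the ceph image: create, init, start, attach, then died and remove within the same second. A hedged sketch of that one-shot pattern with podman run --rm; the in-container command is a placeholder, since the journal does not record what eager_knuth actually executed, and the image's default entrypoint behaviour is assumed.

# Hedged sketch of the one-shot helper-container pattern visible above
# (create -> start -> attach -> died -> remove within a second).
# The in-container command is a placeholder, not what eager_knuth ran.
import subprocess

IMAGE = "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"

def run_once(args):
    # --rm removes the container as soon as the command exits, matching the
    # immediate "container remove" event journald records for these helpers.
    return subprocess.run(["podman", "run", "--rm", IMAGE, *args],
                          capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    print(run_once(["ceph", "--version"]))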
Dec 07 10:10:51 compute-1 podman[235683]: 2025-12-07 10:10:51.841075731 +0000 UTC m=+0.043015892 container create d806747e49027c7eef1f2779ed2cfe85379d3f8d077076eb169ddd0aef5b2010 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_shockley, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec 07 10:10:51 compute-1 systemd[1]: Started libpod-conmon-d806747e49027c7eef1f2779ed2cfe85379d3f8d077076eb169ddd0aef5b2010.scope.
Dec 07 10:10:51 compute-1 systemd[1]: Started libcrun container.
Dec 07 10:10:51 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cd01b7136a2dac3699bd9aafc367ebc3f671915a4f05f983715d453d7778b56/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 07 10:10:51 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cd01b7136a2dac3699bd9aafc367ebc3f671915a4f05f983715d453d7778b56/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 07 10:10:51 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cd01b7136a2dac3699bd9aafc367ebc3f671915a4f05f983715d453d7778b56/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 07 10:10:51 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cd01b7136a2dac3699bd9aafc367ebc3f671915a4f05f983715d453d7778b56/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 07 10:10:51 compute-1 podman[235683]: 2025-12-07 10:10:51.824201072 +0000 UTC m=+0.026141243 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 07 10:10:51 compute-1 podman[235683]: 2025-12-07 10:10:51.924280355 +0000 UTC m=+0.126220546 container init d806747e49027c7eef1f2779ed2cfe85379d3f8d077076eb169ddd0aef5b2010 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_shockley, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec 07 10:10:51 compute-1 podman[235683]: 2025-12-07 10:10:51.931552253 +0000 UTC m=+0.133492404 container start d806747e49027c7eef1f2779ed2cfe85379d3f8d077076eb169ddd0aef5b2010 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec 07 10:10:51 compute-1 podman[235683]: 2025-12-07 10:10:51.934575765 +0000 UTC m=+0.136515966 container attach d806747e49027c7eef1f2779ed2cfe85379d3f8d077076eb169ddd0aef5b2010 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_shockley, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 07 10:10:52 compute-1 ceph-mon[80077]: pgmap v859: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 240 KiB/s wr, 74 op/s
Dec 07 10:10:52 compute-1 gracious_shockley[235700]: [
Dec 07 10:10:52 compute-1 gracious_shockley[235700]:     {
Dec 07 10:10:52 compute-1 gracious_shockley[235700]:         "available": false,
Dec 07 10:10:52 compute-1 gracious_shockley[235700]:         "being_replaced": false,
Dec 07 10:10:52 compute-1 gracious_shockley[235700]:         "ceph_device_lvm": false,
Dec 07 10:10:52 compute-1 gracious_shockley[235700]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 07 10:10:52 compute-1 gracious_shockley[235700]:         "lsm_data": {},
Dec 07 10:10:52 compute-1 gracious_shockley[235700]:         "lvs": [],
Dec 07 10:10:52 compute-1 gracious_shockley[235700]:         "path": "/dev/sr0",
Dec 07 10:10:52 compute-1 gracious_shockley[235700]:         "rejected_reasons": [
Dec 07 10:10:52 compute-1 gracious_shockley[235700]:             "Has a FileSystem",
Dec 07 10:10:52 compute-1 gracious_shockley[235700]:             "Insufficient space (<5GB)"
Dec 07 10:10:52 compute-1 gracious_shockley[235700]:         ],
Dec 07 10:10:52 compute-1 gracious_shockley[235700]:         "sys_api": {
Dec 07 10:10:52 compute-1 gracious_shockley[235700]:             "actuators": null,
Dec 07 10:10:52 compute-1 gracious_shockley[235700]:             "device_nodes": [
Dec 07 10:10:52 compute-1 gracious_shockley[235700]:                 "sr0"
Dec 07 10:10:52 compute-1 gracious_shockley[235700]:             ],
Dec 07 10:10:52 compute-1 gracious_shockley[235700]:             "devname": "sr0",
Dec 07 10:10:52 compute-1 gracious_shockley[235700]:             "human_readable_size": "482.00 KB",
Dec 07 10:10:52 compute-1 gracious_shockley[235700]:             "id_bus": "ata",
Dec 07 10:10:52 compute-1 gracious_shockley[235700]:             "model": "QEMU DVD-ROM",
Dec 07 10:10:52 compute-1 gracious_shockley[235700]:             "nr_requests": "2",
Dec 07 10:10:52 compute-1 gracious_shockley[235700]:             "parent": "/dev/sr0",
Dec 07 10:10:52 compute-1 gracious_shockley[235700]:             "partitions": {},
Dec 07 10:10:52 compute-1 gracious_shockley[235700]:             "path": "/dev/sr0",
Dec 07 10:10:52 compute-1 gracious_shockley[235700]:             "removable": "1",
Dec 07 10:10:52 compute-1 gracious_shockley[235700]:             "rev": "2.5+",
Dec 07 10:10:52 compute-1 gracious_shockley[235700]:             "ro": "0",
Dec 07 10:10:52 compute-1 gracious_shockley[235700]:             "rotational": "1",
Dec 07 10:10:52 compute-1 gracious_shockley[235700]:             "sas_address": "",
Dec 07 10:10:52 compute-1 gracious_shockley[235700]:             "sas_device_handle": "",
Dec 07 10:10:52 compute-1 gracious_shockley[235700]:             "scheduler_mode": "mq-deadline",
Dec 07 10:10:52 compute-1 gracious_shockley[235700]:             "sectors": 0,
Dec 07 10:10:52 compute-1 gracious_shockley[235700]:             "sectorsize": "2048",
Dec 07 10:10:52 compute-1 gracious_shockley[235700]:             "size": 493568.0,
Dec 07 10:10:52 compute-1 gracious_shockley[235700]:             "support_discard": "2048",
Dec 07 10:10:52 compute-1 gracious_shockley[235700]:             "type": "disk",
Dec 07 10:10:52 compute-1 gracious_shockley[235700]:             "vendor": "QEMU"
Dec 07 10:10:52 compute-1 gracious_shockley[235700]:         }
Dec 07 10:10:52 compute-1 gracious_shockley[235700]:     }
Dec 07 10:10:52 compute-1 gracious_shockley[235700]: ]
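The gracious_shockley container is the ceph-volume inventory --format=json-pretty --filter-for-batch run requested a moment earlier; its only device, /dev/sr0, is rejected ("Has a FileSystem", "Insufficient space (<5GB)"), so no OSDs can be placed on this host. A small sketch filtering that JSON the way a batch selection would, kept as an illustration rather than cephadm's actual selection code.

# Hedged sketch: pick usable OSD candidates from the ceph-volume inventory JSON above
# (keep devices with "available": true and no rejected_reasons).
import json

def usable_devices(inventory_json: str):
    devices = json.loads(inventory_json)
    return [d["path"] for d in devices
            if d.get("available") and not d.get("rejected_reasons")]

if __name__ == "__main__":
    sample = '[{"available": false, "path": "/dev/sr0", "rejected_reasons": ["Has a FileSystem", "Insufficient space (<5GB)"]}]'
    print(usable_devices(sample))   # [] -- nothing on this host qualifies for an OSD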
Dec 07 10:10:52 compute-1 systemd[1]: libpod-d806747e49027c7eef1f2779ed2cfe85379d3f8d077076eb169ddd0aef5b2010.scope: Deactivated successfully.
Dec 07 10:10:52 compute-1 podman[235683]: 2025-12-07 10:10:52.735282193 +0000 UTC m=+0.937222344 container died d806747e49027c7eef1f2779ed2cfe85379d3f8d077076eb169ddd0aef5b2010 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec 07 10:10:52 compute-1 systemd[1]: var-lib-containers-storage-overlay-6cd01b7136a2dac3699bd9aafc367ebc3f671915a4f05f983715d453d7778b56-merged.mount: Deactivated successfully.
Dec 07 10:10:52 compute-1 podman[235683]: 2025-12-07 10:10:52.775675752 +0000 UTC m=+0.977615903 container remove d806747e49027c7eef1f2779ed2cfe85379d3f8d077076eb169ddd0aef5b2010 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_shockley, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 07 10:10:52 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:52 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:10:52 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:52.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:10:52 compute-1 systemd[1]: libpod-conmon-d806747e49027c7eef1f2779ed2cfe85379d3f8d077076eb169ddd0aef5b2010.scope: Deactivated successfully.
Dec 07 10:10:52 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:10:52 compute-1 sudo[235571]: pam_unix(sudo:session): session closed for user root
Dec 07 10:10:53 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:53 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:10:53 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:53.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:10:53 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:10:53 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:10:53 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:10:53 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:10:53 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:10:53 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 07 10:10:53 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:10:53 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:10:53 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 07 10:10:53 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 07 10:10:53 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:10:53 compute-1 ceph-mon[80077]: pgmap v860: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 07 10:10:54 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:54 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:10:54 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:54.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:10:55 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:55 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:10:55 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:55.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:10:55 compute-1 podman[236948]: 2025-12-07 10:10:55.596411285 +0000 UTC m=+0.090364919 container health_status 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 07 10:10:56 compute-1 ceph-mon[80077]: pgmap v861: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Dec 07 10:10:56 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:56 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:10:56 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:56.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:10:57 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:57 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:10:57 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:57.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:10:57 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:10:58 compute-1 sudo[236970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 07 10:10:58 compute-1 sudo[236970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:10:58 compute-1 sudo[236970]: pam_unix(sudo:session): session closed for user root
Dec 07 10:10:58 compute-1 ceph-mon[80077]: pgmap v862: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.2 KiB/s wr, 74 op/s
Dec 07 10:10:58 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:10:58 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:10:58 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:10:58 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:58 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:10:58 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:10:58.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:10:59 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:10:59 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:10:59 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:10:59.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:11:00 compute-1 ceph-mon[80077]: pgmap v863: 337 pgs: 337 active+clean; 191 MiB data, 347 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.7 MiB/s wr, 116 op/s
Dec 07 10:11:00 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:00 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:11:00 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:00.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:11:01 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:01 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:11:01 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:01.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:11:02 compute-1 ceph-mon[80077]: pgmap v864: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 593 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Dec 07 10:11:02 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:02 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:11:02 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:02.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:11:02 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:11:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/3243843515' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 07 10:11:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/3243843515' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 07 10:11:03 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:03 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:11:03 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:03.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:11:03 compute-1 sudo[236998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:11:03 compute-1 sudo[236998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:11:03 compute-1 sudo[236998]: pam_unix(sudo:session): session closed for user root
Dec 07 10:11:03 compute-1 podman[237015]: 2025-12-07 10:11:03.584973098 +0000 UTC m=+0.073631535 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 07 10:11:04 compute-1 ceph-mon[80077]: pgmap v865: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Dec 07 10:11:04 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:04 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:11:04 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:04.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:11:05 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:05 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:11:05 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:05.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:11:06 compute-1 ceph-mon[80077]: pgmap v866: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Dec 07 10:11:06 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:06 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:11:06 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:06.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:11:07 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:07 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:11:07 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:07.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:11:07 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:11:08 compute-1 ceph-mon[80077]: pgmap v867: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Dec 07 10:11:08 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:08 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:11:08 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:08.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:11:09 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2292334337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:11:09 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:09 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:11:09 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:09.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:11:09 compute-1 sshd-session[237045]: Invalid user postgres from 104.248.193.130 port 51584
Dec 07 10:11:09 compute-1 sshd-session[237045]: Connection closed by invalid user postgres 104.248.193.130 port 51584 [preauth]
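The two sshd-session lines record an opportunistic login probe for user "postgres" from 104.248.193.130 that closed before authentication. A small sketch tallying such probes per source IP from the journal; shelling out to journalctl with --grep is an assumption about how the lines are collected on this host.

# Hedged sketch: count "Invalid user" SSH probes per (source IP, username) from the
# journal, like the 104.248.193.130 attempt above. Assumes journalctl --grep is available.
import re
import subprocess
from collections import Counter

PATTERN = re.compile(r"Invalid user (\S+) from (\S+) port \d+")

def failed_ssh_probes():
    text = subprocess.run(["journalctl", "--no-pager", "--grep", "Invalid user"],
                          capture_output=True, text=True, check=True).stdout
    hits = Counter()
    for user, ip in PATTERN.findall(text):
        hits[(ip, user)] += 1
    return hits

if __name__ == "__main__":
    for (ip, user), n in failed_ssh_probes().most_common(10):
        print(f"{n:4d}  {ip:18s} as {user}")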
Dec 07 10:11:10 compute-1 ceph-mon[80077]: pgmap v868: 337 pgs: 337 active+clean; 139 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 324 KiB/s rd, 2.2 MiB/s wr, 90 op/s
Dec 07 10:11:10 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:10 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:11:10 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:10.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:11:11 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:11 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:11:11 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:11.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:11:12 compute-1 ceph-mon[80077]: pgmap v869: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 116 KiB/s rd, 473 KiB/s wr, 49 op/s
Dec 07 10:11:12 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:11:12.575 142603 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '76:bc:71', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ce:90:a9:76:77:00'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 07 10:11:12 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:11:12.577 142603 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 07 10:11:12 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:11:12.579 142603 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e231b22a-cdf9-44dd-ad96-a8e48b3d52da, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 07 10:11:12 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:11:12 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:12 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:11:12 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:12.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:11:13 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:11:13 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:13 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:11:13 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:13.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:11:13 compute-1 ceph-mon[80077]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 07 10:11:13 compute-1 ceph-mon[80077]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 5268 writes, 27K keys, 5268 commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.04 MB/s
                                           Cumulative WAL: 5268 writes, 5268 syncs, 1.00 writes per sync, written: 0.07 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1552 writes, 7326 keys, 1552 commit groups, 1.0 writes per commit group, ingest: 17.04 MB, 0.03 MB/s
                                           Interval WAL: 1552 writes, 1552 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     81.1      0.47              0.14        14    0.033       0      0       0.0       0.0
                                             L6      1/0   12.39 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   4.4     94.0     81.5      2.04              0.51        13    0.157     67K   6707       0.0       0.0
                                            Sum      1/0   12.39 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   5.4     76.5     81.4      2.51              0.65        27    0.093     67K   6707       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.3     93.9     92.6      0.80              0.22        10    0.080     29K   2581       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0     94.0     81.5      2.04              0.51        13    0.157     67K   6707       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     81.4      0.47              0.14        13    0.036       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.037, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.20 GB write, 0.11 MB/s write, 0.19 GB read, 0.11 MB/s read, 2.5 seconds
                                           Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.13 MB/s read, 0.8 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5563169dd350#2 capacity: 304.00 MB usage: 14.70 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000267 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(794,14.16 MB,4.65702%) FilterBlock(27,201.30 KB,0.0646641%) IndexBlock(27,350.92 KB,0.112729%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
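The RocksDB dump above gives the mon store's cumulative figures: 0.07 GB of user ingest, 0.07 GB of WAL writes, 0.037 GB flushed, and 0.20 GB written by compaction. A short sketch re-deriving the write amplification from those numbers; the definitions used are common approximations, stated here as assumptions rather than taken from RocksDB source.

# Hedged sketch: re-derive write amplification from the figures in the stats dump above.
ingest_gb = 0.07            # Cumulative writes ... ingest
wal_gb = 0.07               # Cumulative WAL ... written
flush_gb = 0.037            # Flush(GB): cumulative
compaction_write_gb = 0.20  # Cumulative compaction ... write

# Compaction-stats style W-Amp: compaction writes per byte flushed into the tree.
w_amp_tree = compaction_write_gb / flush_gb                           # ~5.4, matches the Sum row
# End-to-end amplification: everything hitting disk per byte of user ingest.
w_amp_total = (wal_gb + flush_gb + compaction_write_gb) / ingest_gb   # ~4.4

print(f"tree W-Amp ~= {w_amp_tree:.1f}, end-to-end ~= {w_amp_total:.1f}")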
Dec 07 10:11:14 compute-1 ceph-mon[80077]: pgmap v870: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 22 KiB/s wr, 29 op/s
Dec 07 10:11:14 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/3328962706' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:11:14 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:14 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:11:14 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:14.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:11:15 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:15 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:11:15 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:15.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:11:16 compute-1 nova_compute[230488]: 2025-12-07 10:11:16.270 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:11:16 compute-1 nova_compute[230488]: 2025-12-07 10:11:16.270 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:11:16 compute-1 nova_compute[230488]: 2025-12-07 10:11:16.304 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:11:16 compute-1 nova_compute[230488]: 2025-12-07 10:11:16.305 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:11:16 compute-1 nova_compute[230488]: 2025-12-07 10:11:16.305 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:11:16 compute-1 nova_compute[230488]: 2025-12-07 10:11:16.305 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 07 10:11:16 compute-1 nova_compute[230488]: 2025-12-07 10:11:16.306 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 07 10:11:16 compute-1 ceph-mon[80077]: pgmap v871: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 23 KiB/s wr, 57 op/s
Dec 07 10:11:16 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 07 10:11:16 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2491850787' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:11:16 compute-1 nova_compute[230488]: 2025-12-07 10:11:16.745 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 07 10:11:16 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:16 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:11:16 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:16.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:11:16 compute-1 nova_compute[230488]: 2025-12-07 10:11:16.945 230492 WARNING nova.virt.libvirt.driver [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 07 10:11:16 compute-1 nova_compute[230488]: 2025-12-07 10:11:16.946 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5230MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 07 10:11:16 compute-1 nova_compute[230488]: 2025-12-07 10:11:16.946 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:11:16 compute-1 nova_compute[230488]: 2025-12-07 10:11:16.947 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:11:17 compute-1 nova_compute[230488]: 2025-12-07 10:11:17.018 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 07 10:11:17 compute-1 nova_compute[230488]: 2025-12-07 10:11:17.019 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 07 10:11:17 compute-1 nova_compute[230488]: 2025-12-07 10:11:17.032 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 07 10:11:17 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/2491850787' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:11:17 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:17 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:11:17 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:17.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:11:17 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 07 10:11:17 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1976333836' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:11:17 compute-1 nova_compute[230488]: 2025-12-07 10:11:17.514 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 07 10:11:17 compute-1 nova_compute[230488]: 2025-12-07 10:11:17.524 230492 DEBUG nova.compute.provider_tree [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Inventory has not changed in ProviderTree for provider: 58b51610-0751-43d9-94a3-66540bffec81 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 07 10:11:17 compute-1 nova_compute[230488]: 2025-12-07 10:11:17.543 230492 DEBUG nova.scheduler.client.report [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Inventory has not changed for provider 58b51610-0751-43d9-94a3-66540bffec81 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 07 10:11:17 compute-1 nova_compute[230488]: 2025-12-07 10:11:17.547 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 07 10:11:17 compute-1 nova_compute[230488]: 2025-12-07 10:11:17.547 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:11:17 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:11:18 compute-1 ceph-mon[80077]: pgmap v872: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 11 KiB/s wr, 56 op/s
Dec 07 10:11:18 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/1976333836' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:11:18 compute-1 nova_compute[230488]: 2025-12-07 10:11:18.548 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:11:18 compute-1 nova_compute[230488]: 2025-12-07 10:11:18.548 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:11:18 compute-1 nova_compute[230488]: 2025-12-07 10:11:18.549 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:11:18 compute-1 nova_compute[230488]: 2025-12-07 10:11:18.549 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 07 10:11:18 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:18 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:11:18 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:18.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:11:19 compute-1 nova_compute[230488]: 2025-12-07 10:11:19.271 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:11:19 compute-1 nova_compute[230488]: 2025-12-07 10:11:19.272 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 07 10:11:19 compute-1 nova_compute[230488]: 2025-12-07 10:11:19.272 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 07 10:11:19 compute-1 nova_compute[230488]: 2025-12-07 10:11:19.305 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 07 10:11:19 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:19 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:11:19 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:19.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:11:19 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/289932820' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:11:20 compute-1 ceph-mon[80077]: pgmap v873: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 11 KiB/s wr, 56 op/s
Dec 07 10:11:20 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/1493119558' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:11:20 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/73554600' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:11:20 compute-1 podman[237097]: 2025-12-07 10:11:20.675997086 +0000 UTC m=+0.163765467 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 07 10:11:20 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:20 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:11:20 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:20.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:11:21 compute-1 nova_compute[230488]: 2025-12-07 10:11:21.268 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:11:21 compute-1 nova_compute[230488]: 2025-12-07 10:11:21.270 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:11:21 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:21 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:11:21 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:21.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:11:21 compute-1 ceph-mon[80077]: pgmap v874: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec 07 10:11:21 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/576171414' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:11:22 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:11:22 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:22 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:11:22 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:22.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:11:23 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:23 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:11:23 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:23.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:11:23 compute-1 sudo[237125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:11:23 compute-1 sudo[237125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:11:23 compute-1 sudo[237125]: pam_unix(sudo:session): session closed for user root
Dec 07 10:11:24 compute-1 ceph-mon[80077]: pgmap v875: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 07 10:11:24 compute-1 nova_compute[230488]: 2025-12-07 10:11:24.266 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:11:24 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:24 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:11:24 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:24.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:11:25 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:25 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:11:25 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:25.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:11:26 compute-1 ceph-mon[80077]: pgmap v876: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 07 10:11:26 compute-1 podman[237152]: 2025-12-07 10:11:26.590684548 +0000 UTC m=+0.085509597 container health_status 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 07 10:11:26 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:26 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:11:26 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:26.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:11:27 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:27 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:11:27 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:27.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:11:27 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:11:28 compute-1 ceph-mon[80077]: pgmap v877: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 07 10:11:28 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:11:28 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:28 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:11:28 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:28.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:11:29 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:29 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:11:29 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:29.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:11:30 compute-1 ceph-mon[80077]: pgmap v878: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 07 10:11:30 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:30 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:11:30 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:30.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:11:31 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:31 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:11:31 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:31.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:11:32 compute-1 ceph-mon[80077]: pgmap v879: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 10:11:32 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:11:32 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:32 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:11:32 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:32.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:11:33 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:33 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:11:33 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:33.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:11:34 compute-1 ceph-mon[80077]: pgmap v880: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 07 10:11:34 compute-1 podman[237176]: 2025-12-07 10:11:34.617252002 +0000 UTC m=+0.114267160 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 07 10:11:34 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:34 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:11:34 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:34.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:11:35 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:35 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:11:35 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:35.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:11:36 compute-1 ceph-mon[80077]: pgmap v881: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 07 10:11:36 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:36 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:11:36 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:36.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:11:37 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1325293765' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:11:37 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:37 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:11:37 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:37.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:11:37 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:11:38 compute-1 ceph-mon[80077]: pgmap v882: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 07 10:11:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:11:38.646 142603 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:11:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:11:38.646 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:11:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:11:38.646 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:11:38 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:38 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:11:38 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:38.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:11:39 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:39 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:11:39 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:39.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:11:40 compute-1 ceph-mon[80077]: pgmap v883: 337 pgs: 337 active+clean; 76 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.5 MiB/s wr, 13 op/s
Dec 07 10:11:40 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:40 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:11:40 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:40.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:11:41 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:41 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:11:41 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:41.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:11:42 compute-1 ceph-mon[80077]: pgmap v884: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 07 10:11:42 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/905991167' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 07 10:11:42 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/548017098' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 07 10:11:42 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:11:42 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:42 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:11:42 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:42.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:11:43 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:11:43 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:43 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:11:43 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:43.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:11:43 compute-1 sudo[237199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:11:43 compute-1 sudo[237199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:11:43 compute-1 sudo[237199]: pam_unix(sudo:session): session closed for user root
Dec 07 10:11:44 compute-1 ceph-mon[80077]: pgmap v885: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 07 10:11:44 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:44 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:11:44 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:44.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:11:45 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:45 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:11:45 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:45.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:11:45 compute-1 ceph-mon[80077]: pgmap v886: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 273 KiB/s rd, 1.8 MiB/s wr, 47 op/s
Dec 07 10:11:46 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:46 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:11:46 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:46.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:11:47 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:47 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:11:47 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:47.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:11:47 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:11:48 compute-1 ceph-mon[80077]: pgmap v887: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 273 KiB/s rd, 1.8 MiB/s wr, 47 op/s
Dec 07 10:11:48 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:48 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:11:48 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:48.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:11:49 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:49 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:11:49 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:49.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:11:50 compute-1 ceph-mon[80077]: pgmap v888: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 96 op/s
Dec 07 10:11:50 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:50 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:11:50 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:50.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:11:51 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:51 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:11:51 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:51.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:11:51 compute-1 podman[237232]: 2025-12-07 10:11:51.620544951 +0000 UTC m=+0.117467757 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 07 10:11:51 compute-1 sshd-session[237230]: Invalid user postgres from 104.248.193.130 port 54592
Dec 07 10:11:51 compute-1 sshd-session[237230]: Connection closed by invalid user postgres 104.248.193.130 port 54592 [preauth]
Dec 07 10:11:52 compute-1 ceph-mon[80077]: pgmap v889: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 335 KiB/s wr, 87 op/s
Dec 07 10:11:52 compute-1 sshd-session[237228]: Invalid user admin from 78.128.112.74 port 55284
Dec 07 10:11:52 compute-1 sshd-session[237228]: Connection closed by invalid user admin 78.128.112.74 port 55284 [preauth]
Dec 07 10:11:52 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:11:52 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:52 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:11:52 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:52.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:11:53 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:53 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:11:53 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:53.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:11:54 compute-1 ceph-mon[80077]: pgmap v890: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 07 10:11:54 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:54 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:11:54 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:54.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:11:55 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:55 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:11:55 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:55.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:11:56 compute-1 ceph-mon[80077]: pgmap v891: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Dec 07 10:11:56 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:56 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:11:56 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:56.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:11:57 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:57 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:11:57 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:57.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:11:57 compute-1 podman[237262]: 2025-12-07 10:11:57.608122695 +0000 UTC m=+0.104235737 container health_status 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 07 10:11:57 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:11:58 compute-1 sudo[237284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 10:11:58 compute-1 sudo[237284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:11:58 compute-1 sudo[237284]: pam_unix(sudo:session): session closed for user root
Dec 07 10:11:58 compute-1 ceph-mon[80077]: pgmap v892: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 56 op/s
Dec 07 10:11:58 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:11:58 compute-1 sudo[237309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 07 10:11:58 compute-1 sudo[237309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:11:58 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:58 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:11:58 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:11:58.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:11:59 compute-1 sudo[237309]: pam_unix(sudo:session): session closed for user root
Dec 07 10:11:59 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:11:59 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:11:59 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:11:59.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:11:59 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:11:59 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:11:59 compute-1 ceph-mon[80077]: pgmap v893: 337 pgs: 337 active+clean; 111 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.3 MiB/s wr, 78 op/s
Dec 07 10:11:59 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:11:59 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 07 10:11:59 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:11:59 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:11:59 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 07 10:11:59 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 07 10:11:59 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:12:00 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:00 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:12:00 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:00.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:12:01 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:01 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:12:01 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:01.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:12:02 compute-1 ceph-mon[80077]: pgmap v894: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 468 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Dec 07 10:12:02 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:12:02 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:02 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:12:02 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:02.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:12:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/867328843' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 07 10:12:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/867328843' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 07 10:12:03 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:03 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:12:03 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:03.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:12:03 compute-1 sudo[237367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:12:03 compute-1 sudo[237367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:12:03 compute-1 sudo[237367]: pam_unix(sudo:session): session closed for user root
Dec 07 10:12:04 compute-1 ceph-mon[80077]: pgmap v895: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 07 10:12:04 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:04 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:12:04 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:04.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:12:05 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:05 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:12:05 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:05.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:12:05 compute-1 sudo[237393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 07 10:12:05 compute-1 sudo[237393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:12:05 compute-1 sudo[237393]: pam_unix(sudo:session): session closed for user root
Dec 07 10:12:05 compute-1 podman[237405]: 2025-12-07 10:12:05.57257137 +0000 UTC m=+0.069051879 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 07 10:12:06 compute-1 ceph-mon[80077]: pgmap v896: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 07 10:12:06 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:12:06 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:12:06 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:06 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:12:06 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:06.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:12:07 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:07 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:12:07 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:07.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:12:07 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:12:08 compute-1 ceph-mon[80077]: pgmap v897: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 316 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Dec 07 10:12:08 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:08 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:12:08 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:08.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:12:09 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:09 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:12:09 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:09.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:12:10 compute-1 ceph-mon[80077]: pgmap v898: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 316 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Dec 07 10:12:10 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:10 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:12:10 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:10.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:12:11 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:11 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:12:11 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:11.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:12:12 compute-1 ceph-mon[80077]: pgmap v899: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 251 KiB/s rd, 859 KiB/s wr, 40 op/s
Dec 07 10:12:12 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:12:12 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:12 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:12:12 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:12.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:12:13 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:12:13.170 142603 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '76:bc:71', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ce:90:a9:76:77:00'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 07 10:12:13 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:12:13.171 142603 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 07 10:12:13 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:12:13 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:13 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:12:13 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:13.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:12:14 compute-1 ceph-mon[80077]: pgmap v900: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 15 KiB/s wr, 0 op/s
Dec 07 10:12:14 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:14 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:12:14 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:14.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:12:15 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:15 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:12:15 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:15.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:12:16 compute-1 nova_compute[230488]: 2025-12-07 10:12:16.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:12:16 compute-1 nova_compute[230488]: 2025-12-07 10:12:16.295 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:12:16 compute-1 nova_compute[230488]: 2025-12-07 10:12:16.295 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:12:16 compute-1 nova_compute[230488]: 2025-12-07 10:12:16.295 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
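The acquire/release pairs logged just above come from oslo.concurrency's locking helpers, which the resource tracker uses to serialize work on "compute_resources". A minimal sketch of the same pattern, assuming a plain lockutils decorator rather than Nova's own wrapper (the function name below is illustrative, not Nova code):

    from oslo_concurrency import lockutils

    # Callers of this function are serialized on the "compute_resources" lock,
    # producing acquire/release log lines like the ones above.
    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        # ... work guarded by the lock ...
        pass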
Dec 07 10:12:16 compute-1 nova_compute[230488]: 2025-12-07 10:12:16.296 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 07 10:12:16 compute-1 nova_compute[230488]: 2025-12-07 10:12:16.296 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 07 10:12:16 compute-1 ceph-mon[80077]: pgmap v901: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 16 KiB/s wr, 1 op/s
Dec 07 10:12:16 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 07 10:12:16 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3062940473' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:12:16 compute-1 nova_compute[230488]: 2025-12-07 10:12:16.774 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
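The resource-audit pass shells out to `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf` to measure cluster capacity, as logged above. A minimal sketch of running the same query by hand and reading the totals follows; the JSON field names (`stats.total_bytes`, `stats.total_avail_bytes`) are assumptions about the `ceph df` output for this release, not values taken from this log:

    import json
    import subprocess

    # Same command the resource tracker logs at 10:12:16 / 10:12:17.
    cmd = ["ceph", "df", "--format=json", "--id", "openstack",
           "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

    # Assumed field names; adjust to the Ceph release actually deployed.
    stats = json.loads(out)["stats"]
    total_gib = stats["total_bytes"] / 2**30
    avail_gib = stats["total_avail_bytes"] / 2**30
    print(f"cluster capacity: {avail_gib:.1f} GiB free of {total_gib:.1f} GiB")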
Dec 07 10:12:16 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:16 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:12:16 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:16.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:12:16 compute-1 nova_compute[230488]: 2025-12-07 10:12:16.953 230492 WARNING nova.virt.libvirt.driver [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 07 10:12:16 compute-1 nova_compute[230488]: 2025-12-07 10:12:16.955 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5261MB free_disk=59.942710876464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 07 10:12:16 compute-1 nova_compute[230488]: 2025-12-07 10:12:16.955 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:12:16 compute-1 nova_compute[230488]: 2025-12-07 10:12:16.955 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:12:17 compute-1 nova_compute[230488]: 2025-12-07 10:12:17.166 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 07 10:12:17 compute-1 nova_compute[230488]: 2025-12-07 10:12:17.167 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 07 10:12:17 compute-1 nova_compute[230488]: 2025-12-07 10:12:17.248 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 07 10:12:17 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:17 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:12:17 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:17.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:12:17 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 07 10:12:17 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1113685946' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:12:17 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/3062940473' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:12:17 compute-1 ceph-mon[80077]: pgmap v902: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 3.3 KiB/s wr, 0 op/s
Dec 07 10:12:17 compute-1 nova_compute[230488]: 2025-12-07 10:12:17.677 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 07 10:12:17 compute-1 nova_compute[230488]: 2025-12-07 10:12:17.686 230492 DEBUG nova.compute.provider_tree [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Inventory has not changed in ProviderTree for provider: 58b51610-0751-43d9-94a3-66540bffec81 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 07 10:12:17 compute-1 nova_compute[230488]: 2025-12-07 10:12:17.705 230492 DEBUG nova.scheduler.client.report [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Inventory has not changed for provider 58b51610-0751-43d9-94a3-66540bffec81 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
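For the inventory reported above, the capacity placement can schedule against is `(total - reserved) * allocation_ratio` per resource class. A small worked example using the exact values from this report; the helper below is illustrative, not Nova or placement code:

    # Inventory as logged by the scheduler report client at 10:12:17.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }

    def effective_capacity(inv):
        # Placement treats (total - reserved) * allocation_ratio as schedulable.
        return {rc: (v["total"] - v["reserved"]) * v["allocation_ratio"]
                for rc, v in inv.items()}

    print(effective_capacity(inventory))
    # -> {'VCPU': 32.0, 'MEMORY_MB': 7167.0, 'DISK_GB': 53.1}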
Dec 07 10:12:17 compute-1 nova_compute[230488]: 2025-12-07 10:12:17.709 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 07 10:12:17 compute-1 nova_compute[230488]: 2025-12-07 10:12:17.710 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:12:17 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:12:18 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/1113685946' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:12:18 compute-1 nova_compute[230488]: 2025-12-07 10:12:18.711 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:12:18 compute-1 nova_compute[230488]: 2025-12-07 10:12:18.712 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:12:18 compute-1 nova_compute[230488]: 2025-12-07 10:12:18.712 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:12:18 compute-1 nova_compute[230488]: 2025-12-07 10:12:18.713 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 07 10:12:18 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:18 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:12:18 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:18.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:12:19 compute-1 nova_compute[230488]: 2025-12-07 10:12:19.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:12:19 compute-1 nova_compute[230488]: 2025-12-07 10:12:19.270 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 07 10:12:19 compute-1 nova_compute[230488]: 2025-12-07 10:12:19.270 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 07 10:12:19 compute-1 nova_compute[230488]: 2025-12-07 10:12:19.296 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 07 10:12:19 compute-1 nova_compute[230488]: 2025-12-07 10:12:19.297 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:12:19 compute-1 nova_compute[230488]: 2025-12-07 10:12:19.297 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:12:19 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:19 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:12:19 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:19.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:12:19 compute-1 ceph-mon[80077]: pgmap v903: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 4.0 KiB/s wr, 1 op/s
Dec 07 10:12:20 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/4186351325' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:12:20 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/2438126098' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:12:20 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:20 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:12:20 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:20.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:12:21 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:12:21.172 142603 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e231b22a-cdf9-44dd-ad96-a8e48b3d52da, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 07 10:12:21 compute-1 nova_compute[230488]: 2025-12-07 10:12:21.288 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:12:21 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:21 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:12:21 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:21.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:12:21 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/900043742' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:12:21 compute-1 ceph-mon[80077]: pgmap v904: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 8.7 KiB/s wr, 1 op/s
Dec 07 10:12:21 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1725739159' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:12:22 compute-1 nova_compute[230488]: 2025-12-07 10:12:22.271 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:12:22 compute-1 podman[237490]: 2025-12-07 10:12:22.631109741 +0000 UTC m=+0.116823600 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 07 10:12:22 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:12:22 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:22 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:12:22 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:22.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:12:23 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/4044736269' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:12:23 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:23 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:12:23 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:23.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:12:23 compute-1 sudo[237516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:12:23 compute-1 sudo[237516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:12:23 compute-1 sudo[237516]: pam_unix(sudo:session): session closed for user root
Dec 07 10:12:24 compute-1 ceph-mon[80077]: pgmap v905: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 6.3 KiB/s wr, 1 op/s
Dec 07 10:12:24 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:24 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:12:24 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:24.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:12:25 compute-1 nova_compute[230488]: 2025-12-07 10:12:25.265 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:12:25 compute-1 nova_compute[230488]: 2025-12-07 10:12:25.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:12:25 compute-1 nova_compute[230488]: 2025-12-07 10:12:25.269 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 07 10:12:25 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:25 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:12:25 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:25.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:12:26 compute-1 ceph-mon[80077]: pgmap v906: 337 pgs: 337 active+clean; 163 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.7 MiB/s wr, 26 op/s
Dec 07 10:12:26 compute-1 nova_compute[230488]: 2025-12-07 10:12:26.282 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:12:26 compute-1 nova_compute[230488]: 2025-12-07 10:12:26.307 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:12:26 compute-1 nova_compute[230488]: 2025-12-07 10:12:26.308 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 07 10:12:26 compute-1 nova_compute[230488]: 2025-12-07 10:12:26.324 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 07 10:12:26 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:26 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:12:26 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:26.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:12:27 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:27 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:12:27 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:27.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:12:27 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:12:28 compute-1 ceph-mon[80077]: pgmap v907: 337 pgs: 337 active+clean; 163 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.7 MiB/s wr, 26 op/s
Dec 07 10:12:28 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:12:28 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2217669224' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 07 10:12:28 compute-1 podman[237544]: 2025-12-07 10:12:28.555516727 +0000 UTC m=+0.055745198 container health_status 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 07 10:12:28 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:28 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:12:28 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:28.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:12:29 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2768967358' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 07 10:12:29 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:29 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:12:29 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:29.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:12:30 compute-1 ceph-mon[80077]: pgmap v908: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 07 10:12:30 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:30 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:12:30 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:30.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:12:31 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:31 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:12:31 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:31.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:12:32 compute-1 ceph-mon[80077]: pgmap v909: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 07 10:12:32 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:12:32 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:32 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:12:32 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:32.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:12:33 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:33 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:12:33 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:33.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:12:34 compute-1 ceph-mon[80077]: pgmap v910: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 07 10:12:34 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:34 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:12:34 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:34.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:12:35 compute-1 sshd-session[237567]: Invalid user postgres from 104.248.193.130 port 42544
Dec 07 10:12:35 compute-1 sshd-session[237567]: Connection closed by invalid user postgres 104.248.193.130 port 42544 [preauth]
Dec 07 10:12:35 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:35 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:12:35 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:35.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:12:36 compute-1 ceph-mon[80077]: pgmap v911: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec 07 10:12:36 compute-1 podman[237570]: 2025-12-07 10:12:36.59366651 +0000 UTC m=+0.093283980 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Dec 07 10:12:36 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:36 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:12:36 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:36.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:12:37 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:37 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:12:37 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:37.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:12:37 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:12:38 compute-1 ceph-mon[80077]: pgmap v912: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 60 KiB/s wr, 75 op/s
Dec 07 10:12:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:12:38.647 142603 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:12:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:12:38.647 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:12:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:12:38.648 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:12:38 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:38 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:12:38 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:38.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:12:39 compute-1 ceph-mon[80077]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #52. Immutable memtables: 0.
Dec 07 10:12:39 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:12:39.373883) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 07 10:12:39 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 52
Dec 07 10:12:39 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102359373915, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2383, "num_deletes": 251, "total_data_size": 6401802, "memory_usage": 6510448, "flush_reason": "Manual Compaction"}
Dec 07 10:12:39 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #53: started
Dec 07 10:12:39 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102359407077, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 53, "file_size": 4137595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26345, "largest_seqno": 28723, "table_properties": {"data_size": 4128037, "index_size": 5988, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 19825, "raw_average_key_size": 20, "raw_value_size": 4109046, "raw_average_value_size": 4218, "num_data_blocks": 262, "num_entries": 974, "num_filter_entries": 974, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765102151, "oldest_key_time": 1765102151, "file_creation_time": 1765102359, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "19b7cd17-892b-4642-8771-311739802c4a", "db_session_id": "JE48258Z8V09XG5T5TD7", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Dec 07 10:12:39 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 33373 microseconds, and 7447 cpu microseconds.
Dec 07 10:12:39 compute-1 ceph-mon[80077]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 07 10:12:39 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:12:39.407249) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #53: 4137595 bytes OK
Dec 07 10:12:39 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:12:39.407327) [db/memtable_list.cc:519] [default] Level-0 commit table #53 started
Dec 07 10:12:39 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:12:39.409576) [db/memtable_list.cc:722] [default] Level-0 commit table #53: memtable #1 done
Dec 07 10:12:39 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:12:39.409592) EVENT_LOG_v1 {"time_micros": 1765102359409588, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 07 10:12:39 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:12:39.409608) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 07 10:12:39 compute-1 ceph-mon[80077]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 6391398, prev total WAL file size 6391398, number of live WAL files 2.
Dec 07 10:12:39 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000049.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 10:12:39 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:12:39.411195) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Dec 07 10:12:39 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 07 10:12:39 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [53(4040KB)], [51(12MB)]
Dec 07 10:12:39 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102359411220, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [53], "files_L6": [51], "score": -1, "input_data_size": 17131767, "oldest_snapshot_seqno": -1}
Dec 07 10:12:39 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #54: 5961 keys, 15015493 bytes, temperature: kUnknown
Dec 07 10:12:39 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102359520296, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 54, "file_size": 15015493, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14974738, "index_size": 24773, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14917, "raw_key_size": 151512, "raw_average_key_size": 25, "raw_value_size": 14866308, "raw_average_value_size": 2493, "num_data_blocks": 1012, "num_entries": 5961, "num_filter_entries": 5961, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100473, "oldest_key_time": 0, "file_creation_time": 1765102359, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "19b7cd17-892b-4642-8771-311739802c4a", "db_session_id": "JE48258Z8V09XG5T5TD7", "orig_file_number": 54, "seqno_to_time_mapping": "N/A"}}
Dec 07 10:12:39 compute-1 ceph-mon[80077]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 07 10:12:39 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:39 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:12:39 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:39.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:12:39 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:12:39.520511) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 15015493 bytes
Dec 07 10:12:39 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:12:39.563870) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 157.0 rd, 137.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.9, 12.4 +0.0 blob) out(14.3 +0.0 blob), read-write-amplify(7.8) write-amplify(3.6) OK, records in: 6481, records dropped: 520 output_compression: NoCompression
Dec 07 10:12:39 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:12:39.563897) EVENT_LOG_v1 {"time_micros": 1765102359563886, "job": 30, "event": "compaction_finished", "compaction_time_micros": 109141, "compaction_time_cpu_micros": 27133, "output_level": 6, "num_output_files": 1, "total_output_size": 15015493, "num_input_records": 6481, "num_output_records": 5961, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 07 10:12:39 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 10:12:39 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102359564756, "job": 30, "event": "table_file_deletion", "file_number": 53}
Dec 07 10:12:39 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000051.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 10:12:39 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102359566959, "job": 30, "event": "table_file_deletion", "file_number": 51}
Dec 07 10:12:39 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:12:39.411160) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:12:39 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:12:39.566992) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:12:39 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:12:39.566997) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:12:39 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:12:39.566999) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:12:39 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:12:39.567000) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:12:39 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:12:39.567002) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
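The mon's RocksDB flush and manual-compaction activity above is emitted as `EVENT_LOG_v1` lines carrying a JSON payload. A small sketch for pulling those events out of a captured journal; saving the journal to a file named `mon.log` is an assumption for the example, while the event fields used here appear verbatim in the lines above:

    import json
    import re

    # Matches the JSON payload of rocksdb EVENT_LOG_v1 journal lines,
    # with or without the "(Original Log Time ...)" prefix.
    EVENT = re.compile(r"rocksdb: .*EVENT_LOG_v1 (\{.*\})$")

    with open("mon.log") as fh:   # e.g. journalctl output saved to a file
        for line in fh:
            m = EVENT.search(line)
            if not m:
                continue
            ev = json.loads(m.group(1))
            if ev.get("event") == "compaction_finished":
                print(f"job {ev['job']}: {ev['compaction_time_micros']} us, "
                      f"{ev['total_output_size']} bytes written")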
Dec 07 10:12:40 compute-1 ceph-mon[80077]: pgmap v913: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 60 KiB/s wr, 75 op/s
Dec 07 10:12:40 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:40 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:12:40 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:40.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:12:41 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:41 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:12:41 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:41.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:12:42 compute-1 ceph-mon[80077]: pgmap v914: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Dec 07 10:12:42 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:12:42 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:42 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:12:42 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:42.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:12:43 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:12:43 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:43 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:12:43 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:43.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:12:44 compute-1 sudo[237593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:12:44 compute-1 sudo[237593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:12:44 compute-1 sudo[237593]: pam_unix(sudo:session): session closed for user root
Dec 07 10:12:44 compute-1 ceph-mon[80077]: pgmap v915: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Dec 07 10:12:44 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:44 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:12:44 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:44.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:12:45 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:45 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:12:45 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:45.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:12:46 compute-1 ceph-mon[80077]: pgmap v916: 337 pgs: 337 active+clean; 188 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 113 op/s
Dec 07 10:12:46 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:46 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:12:46 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:46.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:12:47 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:47 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:12:47 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:47.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:12:47 compute-1 ceph-mon[80077]: pgmap v917: 337 pgs: 337 active+clean; 188 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 125 KiB/s rd, 2.0 MiB/s wr, 39 op/s
Dec 07 10:12:47 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:12:48 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:48 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:12:48 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:48.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:12:49 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:49 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:12:49 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:49.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:12:50 compute-1 ceph-mon[80077]: pgmap v918: 337 pgs: 337 active+clean; 196 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 203 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Dec 07 10:12:50 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:50 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:12:50 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:50.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:12:51 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:51 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:12:51 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:51.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:12:52 compute-1 ceph-mon[80077]: pgmap v919: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 204 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Dec 07 10:12:52 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:12:52.624 142603 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '76:bc:71', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ce:90:a9:76:77:00'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 07 10:12:52 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:12:52.625 142603 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 07 10:12:52 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:12:52 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:52 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:12:52 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:52.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:12:53 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:53 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:12:53 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:53.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:12:53 compute-1 podman[237623]: 2025-12-07 10:12:53.609504739 +0000 UTC m=+0.107414464 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 07 10:12:54 compute-1 ceph-mon[80077]: pgmap v920: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 198 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Dec 07 10:12:54 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:54 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:12:54 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:54.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:12:55 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:55 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:12:55 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:55.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:12:56 compute-1 ceph-mon[80077]: pgmap v921: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 198 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Dec 07 10:12:56 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:12:56.628 142603 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e231b22a-cdf9-44dd-ad96-a8e48b3d52da, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 07 10:12:56 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:56 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:12:56 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:56.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:12:57 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:57 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:12:57 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:57.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:12:57 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:12:58 compute-1 ceph-mon[80077]: pgmap v922: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 79 KiB/s rd, 110 KiB/s wr, 20 op/s
Dec 07 10:12:58 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:12:58 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:58 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:12:58 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:12:58.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:12:59 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:12:59 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:12:59 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:12:59.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:12:59 compute-1 podman[237652]: 2025-12-07 10:12:59.569855024 +0000 UTC m=+0.075151495 container health_status 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd)
Dec 07 10:13:00 compute-1 ceph-mon[80077]: pgmap v923: 337 pgs: 337 active+clean; 134 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 112 KiB/s wr, 40 op/s
Dec 07 10:13:00 compute-1 nova_compute[230488]: 2025-12-07 10:13:00.472 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:13:00 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:00 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:13:00 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:00.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:13:01 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2974024516' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:13:01 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:01 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:13:01 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:01.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:13:02 compute-1 ceph-mon[80077]: pgmap v924: 337 pgs: 337 active+clean; 121 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 31 KiB/s wr, 30 op/s
Dec 07 10:13:02 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:13:02 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:02 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:13:02 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:02.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:13:03 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/101303 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 07 10:13:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/2670520558' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 07 10:13:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/2670520558' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 07 10:13:03 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:03 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:13:03 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:03.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:13:04 compute-1 sudo[237675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:13:04 compute-1 sudo[237675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:13:04 compute-1 sudo[237675]: pam_unix(sudo:session): session closed for user root
Dec 07 10:13:04 compute-1 ceph-mon[80077]: pgmap v925: 337 pgs: 337 active+clean; 121 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 26 KiB/s wr, 30 op/s
Dec 07 10:13:04 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:04 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:13:04 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:04.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:13:05 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:05 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:13:05 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:05.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:13:05 compute-1 sudo[237701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 10:13:05 compute-1 sudo[237701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:13:05 compute-1 sudo[237701]: pam_unix(sudo:session): session closed for user root
Dec 07 10:13:05 compute-1 sudo[237726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 07 10:13:05 compute-1 sudo[237726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:13:06 compute-1 ceph-mon[80077]: pgmap v926: 337 pgs: 337 active+clean; 121 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 26 KiB/s wr, 30 op/s
Dec 07 10:13:06 compute-1 sudo[237726]: pam_unix(sudo:session): session closed for user root
Dec 07 10:13:06 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:06 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:13:06 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:06.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:13:07 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:07 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:13:07 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:07.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:13:07 compute-1 podman[237783]: 2025-12-07 10:13:07.580354134 +0000 UTC m=+0.074580961 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 07 10:13:07 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:13:08 compute-1 ceph-mon[80077]: pgmap v927: 337 pgs: 337 active+clean; 121 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 12 KiB/s wr, 29 op/s
Dec 07 10:13:08 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/2099550192' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:13:08 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:08 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:13:08 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:08.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:13:09 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:09 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:13:09 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:09.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:13:09 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:13:09 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:13:09 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:13:09 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 07 10:13:09 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:13:09 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:13:09 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 07 10:13:09 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 07 10:13:09 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:13:09 compute-1 ceph-mon[80077]: pgmap v928: 337 pgs: 337 active+clean; 63 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 12 KiB/s wr, 50 op/s
Dec 07 10:13:10 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:10 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:13:10 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:10.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:13:11 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:11 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:13:11 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:11.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:13:12 compute-1 ceph-mon[80077]: pgmap v929: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 11 KiB/s wr, 38 op/s
Dec 07 10:13:12 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:13:12 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:12 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:13:12 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:12.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:13:13 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:13:13 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:13 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:13:13 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:13.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:13:13 compute-1 sudo[237805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 07 10:13:14 compute-1 sudo[237805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:13:14 compute-1 sudo[237805]: pam_unix(sudo:session): session closed for user root
Dec 07 10:13:14 compute-1 ceph-mon[80077]: pgmap v930: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec 07 10:13:14 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:13:14 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:13:14 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:14 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:13:14 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:14.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:13:15 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:15 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:13:15 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:15.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:13:16 compute-1 nova_compute[230488]: 2025-12-07 10:13:16.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:13:16 compute-1 ceph-mon[80077]: pgmap v931: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.6 KiB/s wr, 29 op/s
Dec 07 10:13:16 compute-1 nova_compute[230488]: 2025-12-07 10:13:16.298 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:13:16 compute-1 nova_compute[230488]: 2025-12-07 10:13:16.299 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:13:16 compute-1 nova_compute[230488]: 2025-12-07 10:13:16.300 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:13:16 compute-1 nova_compute[230488]: 2025-12-07 10:13:16.300 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 07 10:13:16 compute-1 nova_compute[230488]: 2025-12-07 10:13:16.301 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 07 10:13:16 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 07 10:13:16 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1372150454' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:13:16 compute-1 nova_compute[230488]: 2025-12-07 10:13:16.802 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 07 10:13:16 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:16 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:13:16 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:16.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:13:17 compute-1 nova_compute[230488]: 2025-12-07 10:13:17.000 230492 WARNING nova.virt.libvirt.driver [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 07 10:13:17 compute-1 nova_compute[230488]: 2025-12-07 10:13:17.002 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5264MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 07 10:13:17 compute-1 nova_compute[230488]: 2025-12-07 10:13:17.002 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:13:17 compute-1 nova_compute[230488]: 2025-12-07 10:13:17.003 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:13:17 compute-1 nova_compute[230488]: 2025-12-07 10:13:17.080 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 07 10:13:17 compute-1 nova_compute[230488]: 2025-12-07 10:13:17.080 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 07 10:13:17 compute-1 nova_compute[230488]: 2025-12-07 10:13:17.169 230492 DEBUG nova.scheduler.client.report [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Refreshing inventories for resource provider 58b51610-0751-43d9-94a3-66540bffec81 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 07 10:13:17 compute-1 nova_compute[230488]: 2025-12-07 10:13:17.197 230492 DEBUG nova.scheduler.client.report [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Updating ProviderTree inventory for provider 58b51610-0751-43d9-94a3-66540bffec81 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 07 10:13:17 compute-1 nova_compute[230488]: 2025-12-07 10:13:17.198 230492 DEBUG nova.compute.provider_tree [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Updating inventory in ProviderTree for provider 58b51610-0751-43d9-94a3-66540bffec81 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 07 10:13:17 compute-1 nova_compute[230488]: 2025-12-07 10:13:17.221 230492 DEBUG nova.scheduler.client.report [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Refreshing aggregate associations for resource provider 58b51610-0751-43d9-94a3-66540bffec81, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 07 10:13:17 compute-1 nova_compute[230488]: 2025-12-07 10:13:17.259 230492 DEBUG nova.scheduler.client.report [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Refreshing trait associations for resource provider 58b51610-0751-43d9-94a3-66540bffec81, traits: HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE,HW_CPU_X86_AESNI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AVX,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI2,COMPUTE_RESCUE_BFV,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_F16C,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SVM,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_EXTEND _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 07 10:13:17 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/1372150454' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:13:17 compute-1 nova_compute[230488]: 2025-12-07 10:13:17.291 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 07 10:13:17 compute-1 sshd-session[237854]: Invalid user postgres from 104.248.193.130 port 58062
Dec 07 10:13:17 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:17 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:13:17 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:17.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:13:17 compute-1 sshd-session[237854]: Connection closed by invalid user postgres 104.248.193.130 port 58062 [preauth]
Dec 07 10:13:17 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 07 10:13:17 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3298746586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:13:17 compute-1 nova_compute[230488]: 2025-12-07 10:13:17.755 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 07 10:13:17 compute-1 nova_compute[230488]: 2025-12-07 10:13:17.763 230492 DEBUG nova.compute.provider_tree [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Inventory has not changed in ProviderTree for provider: 58b51610-0751-43d9-94a3-66540bffec81 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 07 10:13:17 compute-1 nova_compute[230488]: 2025-12-07 10:13:17.778 230492 DEBUG nova.scheduler.client.report [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Inventory has not changed for provider 58b51610-0751-43d9-94a3-66540bffec81 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 07 10:13:17 compute-1 nova_compute[230488]: 2025-12-07 10:13:17.782 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 07 10:13:17 compute-1 nova_compute[230488]: 2025-12-07 10:13:17.783 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:13:17 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:13:18 compute-1 ceph-mon[80077]: pgmap v932: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.6 KiB/s wr, 28 op/s
Dec 07 10:13:18 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/3298746586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:13:18 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:18 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:13:18 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:18.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:13:19 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:19 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:13:19 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:19.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:13:20 compute-1 ceph-mon[80077]: pgmap v933: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.6 KiB/s wr, 28 op/s
Dec 07 10:13:20 compute-1 radosgw[84964]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Dec 07 10:13:20 compute-1 radosgw[84964]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Dec 07 10:13:20 compute-1 radosgw[84964]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Dec 07 10:13:20 compute-1 radosgw[84964]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Dec 07 10:13:20 compute-1 radosgw[84964]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Dec 07 10:13:20 compute-1 radosgw[84964]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Dec 07 10:13:20 compute-1 radosgw[84964]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Dec 07 10:13:20 compute-1 radosgw[84964]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Dec 07 10:13:20 compute-1 nova_compute[230488]: 2025-12-07 10:13:20.784 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:13:20 compute-1 nova_compute[230488]: 2025-12-07 10:13:20.784 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:13:20 compute-1 nova_compute[230488]: 2025-12-07 10:13:20.784 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:13:20 compute-1 nova_compute[230488]: 2025-12-07 10:13:20.785 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:13:20 compute-1 nova_compute[230488]: 2025-12-07 10:13:20.785 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 07 10:13:20 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:20 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:13:20 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:20.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:13:21 compute-1 nova_compute[230488]: 2025-12-07 10:13:21.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:13:21 compute-1 nova_compute[230488]: 2025-12-07 10:13:21.270 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 07 10:13:21 compute-1 nova_compute[230488]: 2025-12-07 10:13:21.270 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 07 10:13:21 compute-1 nova_compute[230488]: 2025-12-07 10:13:21.283 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 07 10:13:21 compute-1 nova_compute[230488]: 2025-12-07 10:13:21.283 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:13:21 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:21 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:13:21 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:21.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:13:22 compute-1 ceph-mon[80077]: pgmap v934: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 4.5 KiB/s rd, 1.1 KiB/s wr, 8 op/s
Dec 07 10:13:22 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1503149317' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:13:22 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2084543423' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:13:22 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/576818489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:13:22 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:13:22 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:22 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:13:22 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:22.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:13:23 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/4099492702' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:13:23 compute-1 nova_compute[230488]: 2025-12-07 10:13:23.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:13:23 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:23 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:13:23 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:23.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:13:24 compute-1 ceph-mon[80077]: pgmap v935: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 341 B/s wr, 1 op/s
Dec 07 10:13:24 compute-1 sudo[237883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:13:24 compute-1 sudo[237883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:13:24 compute-1 sudo[237883]: pam_unix(sudo:session): session closed for user root
Dec 07 10:13:24 compute-1 podman[237907]: 2025-12-07 10:13:24.460283432 +0000 UTC m=+0.125513375 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 07 10:13:24 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:24 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:13:24 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:24.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:13:25 compute-1 sshd-session[237937]: Connection closed by 134.209.9.32 port 46652
Dec 07 10:13:25 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:25 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:13:25 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:25.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:13:26 compute-1 nova_compute[230488]: 2025-12-07 10:13:26.264 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:13:26 compute-1 ceph-mon[80077]: pgmap v936: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 341 B/s wr, 173 op/s
Dec 07 10:13:26 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:26 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:13:26 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:26.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:13:27 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:27 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:13:27 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:27.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:13:27 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:13:28 compute-1 ceph-mon[80077]: pgmap v937: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 0 B/s wr, 172 op/s
Dec 07 10:13:28 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:13:28 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:28 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:13:28 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:28.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:13:29 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:29 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:13:29 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:29.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:13:30 compute-1 ceph-mon[80077]: pgmap v938: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 0 B/s wr, 172 op/s
Dec 07 10:13:30 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/1124760797' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:13:30 compute-1 podman[237941]: 2025-12-07 10:13:30.598903186 +0000 UTC m=+0.094533063 container health_status 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 07 10:13:30 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:30 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:13:30 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:30.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:13:31 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:31 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:13:31 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:31.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:13:32 compute-1 ceph-mon[80077]: pgmap v939: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 0 B/s wr, 173 op/s
Dec 07 10:13:32 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:13:32 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:32 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:13:32 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:32.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:13:33 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:33 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:13:33 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:33.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:13:34 compute-1 ceph-mon[80077]: pgmap v940: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 0 B/s wr, 172 op/s
Dec 07 10:13:34 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/3355234200' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 07 10:13:34 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/3875446047' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 07 10:13:34 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:34 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:13:34 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:34.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:13:35 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:35 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:13:35 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:35.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:13:36 compute-1 ceph-mon[80077]: pgmap v941: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 121 KiB/s rd, 1.8 MiB/s wr, 200 op/s
Dec 07 10:13:36 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:36 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:13:36 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:36.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:13:37 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:37 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:13:37 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:37.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:13:37 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:13:38 compute-1 ceph-mon[80077]: pgmap v942: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 07 10:13:38 compute-1 podman[237965]: 2025-12-07 10:13:38.58415418 +0000 UTC m=+0.084206533 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Dec 07 10:13:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:13:38.649 142603 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:13:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:13:38.649 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:13:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:13:38.650 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:13:38 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:38 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:13:38 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:38.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:13:39 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:39 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:13:39 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:39.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:13:40 compute-1 ceph-mon[80077]: pgmap v943: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 95 op/s
Dec 07 10:13:41 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:41 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:13:41 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:40.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:13:41 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:41 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:13:41 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:41.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:13:42 compute-1 ceph-mon[80077]: pgmap v944: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec 07 10:13:42 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:13:43 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:43 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:13:43 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:43.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:13:43 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:13:43 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:43 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:13:43 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:43.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:13:44 compute-1 sudo[237988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:13:44 compute-1 sudo[237988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:13:44 compute-1 sudo[237988]: pam_unix(sudo:session): session closed for user root
Dec 07 10:13:44 compute-1 ceph-mon[80077]: pgmap v945: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec 07 10:13:45 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:45 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:13:45 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:45.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:13:45 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2415869173' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:13:45 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:45 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:13:45 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:45.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:13:46 compute-1 ceph-mon[80077]: pgmap v946: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Dec 07 10:13:47 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:47 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:13:47 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:47.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:13:47 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:47 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:13:47 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:47.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:13:47 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:13:48 compute-1 sshd-session[238015]: Received disconnect from 193.46.255.244 port 53918:11:  [preauth]
Dec 07 10:13:48 compute-1 sshd-session[238015]: Disconnected from authenticating user root 193.46.255.244 port 53918 [preauth]
Dec 07 10:13:48 compute-1 ceph-mon[80077]: pgmap v947: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Dec 07 10:13:49 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:49 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:13:49 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:49.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:13:49 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:49 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:13:49 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:49.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:13:50 compute-1 ceph-mon[80077]: pgmap v948: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Dec 07 10:13:51 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:51 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:13:51 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:51.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:13:51 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:51 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:13:51 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:51.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:13:52 compute-1 ceph-mon[80077]: pgmap v949: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 118 KiB/s rd, 13 KiB/s wr, 33 op/s
Dec 07 10:13:52 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:13:53 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:53 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:13:53 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:53.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:13:53 compute-1 ceph-mon[80077]: pgmap v950: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Dec 07 10:13:53 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:53 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:13:53 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:53.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:13:54 compute-1 podman[238021]: 2025-12-07 10:13:54.610732849 +0000 UTC m=+0.110663231 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 07 10:13:55 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:55 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:13:55 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:55.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:13:55 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:55 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:13:55 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:55.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:13:56 compute-1 ceph-mon[80077]: pgmap v951: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 07 10:13:57 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:57 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:13:57 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:57.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:13:57 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:57 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:13:57 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:57.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:13:57 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:13:58 compute-1 ceph-mon[80077]: pgmap v952: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Dec 07 10:13:58 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:13:58 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/1260759029' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:13:59 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:59 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:13:59 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:13:59.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:13:59 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:13:59 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:13:59 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:13:59.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:14:00 compute-1 ceph-mon[80077]: pgmap v953: 337 pgs: 337 active+clean; 71 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.2 MiB/s wr, 25 op/s
Dec 07 10:14:00 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:14:00.422 142603 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '76:bc:71', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ce:90:a9:76:77:00'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 07 10:14:00 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:14:00.423 142603 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 07 10:14:00 compute-1 sshd-session[238050]: Invalid user postgres from 104.248.193.130 port 33936
Dec 07 10:14:00 compute-1 sshd-session[238050]: Connection closed by invalid user postgres 104.248.193.130 port 33936 [preauth]
Dec 07 10:14:01 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:01 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:14:01 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:01.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:14:01 compute-1 podman[238053]: 2025-12-07 10:14:01.60545504 +0000 UTC m=+0.092375495 container health_status 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=multipathd)
Dec 07 10:14:01 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:01 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:14:01 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:01.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:14:02 compute-1 ceph-mon[80077]: pgmap v954: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 07 10:14:02 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:14:03 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:03 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:14:03 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:03.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:14:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/453120180' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 07 10:14:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/453120180' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 07 10:14:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/4073604742' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 07 10:14:03 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:03 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:14:03 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:03.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:14:04 compute-1 ceph-mon[80077]: pgmap v955: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 07 10:14:04 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/3719801266' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 07 10:14:04 compute-1 sudo[238076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:14:04 compute-1 sudo[238076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:14:04 compute-1 sudo[238076]: pam_unix(sudo:session): session closed for user root
Dec 07 10:14:05 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:05 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:14:05 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:05.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:14:05 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:14:05.425 142603 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e231b22a-cdf9-44dd-ad96-a8e48b3d52da, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 07 10:14:05 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:05 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:14:05 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:05.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:14:06 compute-1 ceph-mon[80077]: pgmap v956: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 07 10:14:07 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:07 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:14:07 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:07.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:14:07 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:07 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:14:07 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:07.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:14:07 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:14:08 compute-1 ceph-mon[80077]: pgmap v957: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 07 10:14:09 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:09 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:14:09 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:09.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:14:09 compute-1 podman[238103]: 2025-12-07 10:14:09.584458172 +0000 UTC m=+0.078458796 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 07 10:14:09 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:09 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:14:09 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:09.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:14:10 compute-1 ceph-mon[80077]: pgmap v958: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 77 op/s
Dec 07 10:14:10 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [WARNING] 340/101410 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 07 10:14:10 compute-1 ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua[86566]: [ALERT] 340/101410 (4) : backend 'backend' has no server available!
Dec 07 10:14:11 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:11 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:14:11 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:11.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:14:11 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:11 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:14:11 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:11.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:14:12 compute-1 ceph-mon[80077]: pgmap v959: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 560 KiB/s wr, 77 op/s
Dec 07 10:14:12 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:14:13 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:13 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:14:13 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:13.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:14:13 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:14:13 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/3195034800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:14:13 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:13 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:14:13 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:13.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:14:14 compute-1 sudo[238126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 10:14:14 compute-1 sudo[238126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:14:14 compute-1 sudo[238126]: pam_unix(sudo:session): session closed for user root
Dec 07 10:14:14 compute-1 sudo[238151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 07 10:14:14 compute-1 sudo[238151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:14:14 compute-1 ceph-mon[80077]: pgmap v960: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec 07 10:14:14 compute-1 sudo[238151]: pam_unix(sudo:session): session closed for user root
Dec 07 10:14:15 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:15 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:14:15 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:15.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:14:15 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:14:15 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 07 10:14:15 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:14:15 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:14:15 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 07 10:14:15 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 07 10:14:15 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:14:15 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:15 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:14:15 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:15.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:14:16 compute-1 nova_compute[230488]: 2025-12-07 10:14:16.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:14:16 compute-1 nova_compute[230488]: 2025-12-07 10:14:16.300 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:14:16 compute-1 nova_compute[230488]: 2025-12-07 10:14:16.300 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:14:16 compute-1 nova_compute[230488]: 2025-12-07 10:14:16.301 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:14:16 compute-1 nova_compute[230488]: 2025-12-07 10:14:16.301 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 07 10:14:16 compute-1 nova_compute[230488]: 2025-12-07 10:14:16.302 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 07 10:14:16 compute-1 ceph-mon[80077]: pgmap v961: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Dec 07 10:14:16 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 07 10:14:16 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4281995187' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:14:16 compute-1 nova_compute[230488]: 2025-12-07 10:14:16.763 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 07 10:14:16 compute-1 nova_compute[230488]: 2025-12-07 10:14:16.914 230492 WARNING nova.virt.libvirt.driver [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 07 10:14:16 compute-1 nova_compute[230488]: 2025-12-07 10:14:16.915 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5248MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 07 10:14:16 compute-1 nova_compute[230488]: 2025-12-07 10:14:16.915 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:14:16 compute-1 nova_compute[230488]: 2025-12-07 10:14:16.915 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:14:16 compute-1 nova_compute[230488]: 2025-12-07 10:14:16.966 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 07 10:14:16 compute-1 nova_compute[230488]: 2025-12-07 10:14:16.966 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 07 10:14:16 compute-1 nova_compute[230488]: 2025-12-07 10:14:16.984 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 07 10:14:17 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:17 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:14:17 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:17.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:14:17 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 07 10:14:17 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/893889581' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:14:17 compute-1 nova_compute[230488]: 2025-12-07 10:14:17.415 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 07 10:14:17 compute-1 nova_compute[230488]: 2025-12-07 10:14:17.421 230492 DEBUG nova.compute.provider_tree [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Inventory has not changed in ProviderTree for provider: 58b51610-0751-43d9-94a3-66540bffec81 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 07 10:14:17 compute-1 nova_compute[230488]: 2025-12-07 10:14:17.459 230492 DEBUG nova.scheduler.client.report [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Inventory has not changed for provider 58b51610-0751-43d9-94a3-66540bffec81 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 07 10:14:17 compute-1 nova_compute[230488]: 2025-12-07 10:14:17.461 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 07 10:14:17 compute-1 nova_compute[230488]: 2025-12-07 10:14:17.461 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.546s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:14:17 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/4281995187' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:14:17 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/893889581' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:14:17 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:17 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:14:17 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:17.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:14:17 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:14:18 compute-1 ceph-mon[80077]: pgmap v962: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 KiB/s wr, 100 op/s
Dec 07 10:14:19 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:19 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:14:19 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:19.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:14:19 compute-1 ceph-mon[80077]: pgmap v963: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 KiB/s wr, 100 op/s
Dec 07 10:14:19 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:19 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:14:19 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:19.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:14:20 compute-1 sudo[238252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 07 10:14:20 compute-1 sudo[238252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:14:20 compute-1 sudo[238252]: pam_unix(sudo:session): session closed for user root
Dec 07 10:14:20 compute-1 nova_compute[230488]: 2025-12-07 10:14:20.461 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:14:20 compute-1 nova_compute[230488]: 2025-12-07 10:14:20.462 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:14:20 compute-1 nova_compute[230488]: 2025-12-07 10:14:20.462 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:14:20 compute-1 nova_compute[230488]: 2025-12-07 10:14:20.462 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:14:20 compute-1 nova_compute[230488]: 2025-12-07 10:14:20.463 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 07 10:14:21 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:14:21 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:14:21 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:21 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.003000080s ======
Dec 07 10:14:21 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:21.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000080s
Dec 07 10:14:21 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:21 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:14:21 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:21.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:14:22 compute-1 ceph-mon[80077]: pgmap v964: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 723 KiB/s rd, 1.3 KiB/s wr, 52 op/s
Dec 07 10:14:22 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/928876254' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:14:22 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:14:23 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/3847333765' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:14:23 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1539702002' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:14:23 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2582793599' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:14:23 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:23 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:14:23 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:23.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:14:23 compute-1 nova_compute[230488]: 2025-12-07 10:14:23.270 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:14:23 compute-1 nova_compute[230488]: 2025-12-07 10:14:23.270 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 07 10:14:23 compute-1 nova_compute[230488]: 2025-12-07 10:14:23.271 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 07 10:14:23 compute-1 nova_compute[230488]: 2025-12-07 10:14:23.286 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 07 10:14:23 compute-1 nova_compute[230488]: 2025-12-07 10:14:23.286 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:14:23 compute-1 nova_compute[230488]: 2025-12-07 10:14:23.287 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:14:23 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:23 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:14:23 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:23.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:14:24 compute-1 ceph-mon[80077]: pgmap v965: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.3 KiB/s wr, 27 op/s
Dec 07 10:14:24 compute-1 sudo[238280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:14:24 compute-1 sudo[238280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:14:24 compute-1 sudo[238280]: pam_unix(sudo:session): session closed for user root
Dec 07 10:14:24 compute-1 podman[238304]: 2025-12-07 10:14:24.855900747 +0000 UTC m=+0.196427386 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 07 10:14:25 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:25 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:14:25 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:25.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:14:25 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:25 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:14:25 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:25.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:14:26 compute-1 ceph-mon[80077]: pgmap v966: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.3 KiB/s wr, 28 op/s
Dec 07 10:14:27 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:27 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:14:27 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:27.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:14:27 compute-1 nova_compute[230488]: 2025-12-07 10:14:27.281 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:14:27 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:27 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:14:27 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:27.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:14:27 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:14:28 compute-1 ceph-mon[80077]: pgmap v967: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 170 B/s wr, 2 op/s
Dec 07 10:14:28 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:14:29 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:29 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:14:29 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:29.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:14:29 compute-1 nova_compute[230488]: 2025-12-07 10:14:29.264 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:14:29 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:29 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:14:29 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:29.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:14:30 compute-1 ceph-mon[80077]: pgmap v968: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 170 B/s wr, 2 op/s
Dec 07 10:14:31 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:31 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:14:31 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:31.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:14:31 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:31 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:14:31 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:31.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:14:32 compute-1 ceph-mon[80077]: pgmap v969: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 170 B/s wr, 2 op/s
Dec 07 10:14:32 compute-1 podman[238336]: 2025-12-07 10:14:32.605872007 +0000 UTC m=+0.104075573 container health_status 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd)
Dec 07 10:14:32 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:14:33 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:33 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:14:33 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:33.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:14:33 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:33 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:14:33 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:33.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:14:34 compute-1 ceph-mon[80077]: pgmap v970: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s
Dec 07 10:14:35 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:35 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:14:35 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:35.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:14:35 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:35 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:14:35 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:35.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:14:36 compute-1 ceph-mon[80077]: pgmap v971: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 0 B/s wr, 1 op/s
Dec 07 10:14:37 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:37 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:14:37 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:37.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:14:37 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:37 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:14:37 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:37.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:14:37 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:14:38 compute-1 ceph-mon[80077]: pgmap v972: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:14:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:14:38.650 142603 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:14:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:14:38.650 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:14:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:14:38.650 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:14:39 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:39 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:14:39 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:39.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:14:39 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:39 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:14:39 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:39.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:14:40 compute-1 ceph-mon[80077]: pgmap v973: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:14:40 compute-1 podman[238362]: 2025-12-07 10:14:40.597265487 +0000 UTC m=+0.087456451 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 07 10:14:41 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:41 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:14:41 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:41.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:14:41 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:41 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:14:41 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:41.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:14:42 compute-1 ceph-mon[80077]: pgmap v974: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:14:42 compute-1 sshd-session[238382]: Invalid user postgres from 104.248.193.130 port 39944
Dec 07 10:14:42 compute-1 sshd-session[238382]: Connection closed by invalid user postgres 104.248.193.130 port 39944 [preauth]
Dec 07 10:14:42 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:14:43 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:43 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:14:43 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:43.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:14:43 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:14:43 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:43 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:14:43 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:43.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:14:44 compute-1 ceph-mon[80077]: pgmap v975: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:14:44 compute-1 sudo[238385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:14:44 compute-1 sudo[238385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:14:44 compute-1 sudo[238385]: pam_unix(sudo:session): session closed for user root
Dec 07 10:14:45 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:45 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:14:45 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:45.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:14:45 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:45 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:14:45 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:45.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:14:46 compute-1 ceph-mon[80077]: pgmap v976: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:14:46 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/3797059465' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:14:47 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:47 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:14:47 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:47.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:14:47 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:47 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:14:47 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:47.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:14:47 compute-1 ceph-mon[80077]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #55. Immutable memtables: 0.
Dec 07 10:14:47 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:14:47.785042) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 07 10:14:47 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 55
Dec 07 10:14:47 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102487785083, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 1554, "num_deletes": 255, "total_data_size": 3724991, "memory_usage": 3776528, "flush_reason": "Manual Compaction"}
Dec 07 10:14:47 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #56: started
Dec 07 10:14:47 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102487803953, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 56, "file_size": 2439275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28728, "largest_seqno": 30277, "table_properties": {"data_size": 2432856, "index_size": 3554, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 13549, "raw_average_key_size": 19, "raw_value_size": 2419908, "raw_average_value_size": 3461, "num_data_blocks": 157, "num_entries": 699, "num_filter_entries": 699, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765102360, "oldest_key_time": 1765102360, "file_creation_time": 1765102487, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "19b7cd17-892b-4642-8771-311739802c4a", "db_session_id": "JE48258Z8V09XG5T5TD7", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Dec 07 10:14:47 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 18980 microseconds, and 9757 cpu microseconds.
Dec 07 10:14:47 compute-1 ceph-mon[80077]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 07 10:14:47 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:14:47.804017) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #56: 2439275 bytes OK
Dec 07 10:14:47 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:14:47.804042) [db/memtable_list.cc:519] [default] Level-0 commit table #56 started
Dec 07 10:14:47 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:14:47.806089) [db/memtable_list.cc:722] [default] Level-0 commit table #56: memtable #1 done
Dec 07 10:14:47 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:14:47.806111) EVENT_LOG_v1 {"time_micros": 1765102487806105, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 07 10:14:47 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:14:47.806134) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 07 10:14:47 compute-1 ceph-mon[80077]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3717866, prev total WAL file size 3717866, number of live WAL files 2.
Dec 07 10:14:47 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000052.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 10:14:47 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:14:47.807557) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353032' seq:72057594037927935, type:22 .. '6C6F676D00373533' seq:0, type:0; will stop at (end)
Dec 07 10:14:47 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 07 10:14:47 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [56(2382KB)], [54(14MB)]
Dec 07 10:14:47 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102487807603, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [56], "files_L6": [54], "score": -1, "input_data_size": 17454768, "oldest_snapshot_seqno": -1}
Dec 07 10:14:47 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:14:47 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #57: 6134 keys, 17320028 bytes, temperature: kUnknown
Dec 07 10:14:47 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102487940929, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 57, "file_size": 17320028, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17275705, "index_size": 27893, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15365, "raw_key_size": 156268, "raw_average_key_size": 25, "raw_value_size": 17161989, "raw_average_value_size": 2797, "num_data_blocks": 1143, "num_entries": 6134, "num_filter_entries": 6134, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100473, "oldest_key_time": 0, "file_creation_time": 1765102487, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "19b7cd17-892b-4642-8771-311739802c4a", "db_session_id": "JE48258Z8V09XG5T5TD7", "orig_file_number": 57, "seqno_to_time_mapping": "N/A"}}
Dec 07 10:14:47 compute-1 ceph-mon[80077]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 07 10:14:47 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:14:47.941199) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 17320028 bytes
Dec 07 10:14:47 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:14:47.942815) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 130.9 rd, 129.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 14.3 +0.0 blob) out(16.5 +0.0 blob), read-write-amplify(14.3) write-amplify(7.1) OK, records in: 6660, records dropped: 526 output_compression: NoCompression
Dec 07 10:14:47 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:14:47.942836) EVENT_LOG_v1 {"time_micros": 1765102487942826, "job": 32, "event": "compaction_finished", "compaction_time_micros": 133367, "compaction_time_cpu_micros": 56813, "output_level": 6, "num_output_files": 1, "total_output_size": 17320028, "num_input_records": 6660, "num_output_records": 6134, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 07 10:14:47 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 10:14:47 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102487943553, "job": 32, "event": "table_file_deletion", "file_number": 56}
Dec 07 10:14:47 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000054.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 10:14:47 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102487947028, "job": 32, "event": "table_file_deletion", "file_number": 54}
Dec 07 10:14:47 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:14:47.807500) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:14:47 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:14:47.947351) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:14:47 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:14:47.947364) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:14:47 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:14:47.947368) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:14:47 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:14:47.947371) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:14:47 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:14:47.947375) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:14:48 compute-1 ceph-mon[80077]: pgmap v977: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:14:49 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:49 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:14:49 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:49.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:14:49 compute-1 ceph-mon[80077]: pgmap v978: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 07 10:14:49 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:49 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:14:49 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:49.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:14:51 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:51 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:14:51 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:51.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:14:51 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:51 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:14:51 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:51.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:14:52 compute-1 ceph-mon[80077]: pgmap v979: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 07 10:14:52 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:14:52 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 07 10:14:52 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/187181620' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 07 10:14:53 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:53 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:14:53 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:53.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:14:53 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1786809290' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 07 10:14:53 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/187181620' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 07 10:14:53 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:53 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:14:53 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:53.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:14:54 compute-1 ceph-mon[80077]: pgmap v980: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 07 10:14:55 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:55 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:14:55 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:55.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:14:55 compute-1 podman[238415]: 2025-12-07 10:14:55.643821391 +0000 UTC m=+0.134777484 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 07 10:14:55 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:55 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:14:55 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:55.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:14:56 compute-1 ceph-mon[80077]: pgmap v981: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 07 10:14:57 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:57 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:14:57 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:57.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:14:57 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:57 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:14:57 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:57.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:14:57 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:14:58 compute-1 ceph-mon[80077]: pgmap v982: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 07 10:14:58 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:14:59 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:59 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:14:59 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:14:59.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:14:59 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:14:59 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:14:59 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:14:59.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:15:00 compute-1 ceph-mon[80077]: pgmap v983: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Dec 07 10:15:01 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:01 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:15:01 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:01.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:15:01 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:01 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:15:01 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:01.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:15:02 compute-1 ceph-mon[80077]: pgmap v984: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec 07 10:15:02 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:15:03 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:03 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:15:03 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:03.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:15:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/2412400529' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 07 10:15:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/2412400529' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 07 10:15:03 compute-1 podman[238446]: 2025-12-07 10:15:03.594514219 +0000 UTC m=+0.085099811 container health_status 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=multipathd)
Dec 07 10:15:03 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:03 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:15:03 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:03.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:15:04 compute-1 ceph-mon[80077]: pgmap v985: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec 07 10:15:04 compute-1 sudo[238468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:15:04 compute-1 sudo[238468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:15:04 compute-1 sudo[238468]: pam_unix(sudo:session): session closed for user root
Dec 07 10:15:05 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:05 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:15:05 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:05.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:15:05 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:05 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:15:05 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:05.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:15:06 compute-1 ceph-mon[80077]: pgmap v986: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Dec 07 10:15:07 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:07 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:15:07 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:07.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:15:07 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:07 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:15:07 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:07.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:15:07 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:15:08 compute-1 ceph-mon[80077]: pgmap v987: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec 07 10:15:09 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:09 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:15:09 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:09.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:15:09 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:09 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:15:09 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:09.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:15:10 compute-1 ceph-mon[80077]: pgmap v988: 337 pgs: 337 active+clean; 121 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 135 op/s
Dec 07 10:15:11 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:11 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:15:11 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:11.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:15:11 compute-1 podman[238496]: 2025-12-07 10:15:11.601119289 +0000 UTC m=+0.094528916 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 07 10:15:11 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:11 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:15:11 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:11.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:15:12 compute-1 ceph-mon[80077]: pgmap v989: 337 pgs: 337 active+clean; 121 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Dec 07 10:15:12 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:15:12 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:15:13 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:13 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:15:13 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:13.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:15:13 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:13 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:15:13 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:13.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:15:14 compute-1 ceph-mon[80077]: pgmap v990: 337 pgs: 337 active+clean; 121 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Dec 07 10:15:15 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:15 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:15:15 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:15.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:15:15 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:15:15.633 142603 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '76:bc:71', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ce:90:a9:76:77:00'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 07 10:15:15 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:15:15.634 142603 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 07 10:15:15 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:15 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:15:15 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:15.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:15:16 compute-1 ceph-mon[80077]: pgmap v991: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 368 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Dec 07 10:15:17 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:17 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:15:17 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:17.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:15:17 compute-1 nova_compute[230488]: 2025-12-07 10:15:17.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:15:17 compute-1 nova_compute[230488]: 2025-12-07 10:15:17.303 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:15:17 compute-1 nova_compute[230488]: 2025-12-07 10:15:17.304 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:15:17 compute-1 nova_compute[230488]: 2025-12-07 10:15:17.304 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:15:17 compute-1 nova_compute[230488]: 2025-12-07 10:15:17.304 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 07 10:15:17 compute-1 nova_compute[230488]: 2025-12-07 10:15:17.305 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 07 10:15:17 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 07 10:15:17 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/592720800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:15:17 compute-1 nova_compute[230488]: 2025-12-07 10:15:17.766 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 07 10:15:17 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:17 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:15:17 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:17.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:15:17 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:15:17 compute-1 nova_compute[230488]: 2025-12-07 10:15:17.975 230492 WARNING nova.virt.libvirt.driver [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 07 10:15:17 compute-1 nova_compute[230488]: 2025-12-07 10:15:17.977 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5248MB free_disk=59.94289016723633GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 07 10:15:17 compute-1 nova_compute[230488]: 2025-12-07 10:15:17.977 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:15:17 compute-1 nova_compute[230488]: 2025-12-07 10:15:17.977 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:15:18 compute-1 nova_compute[230488]: 2025-12-07 10:15:18.044 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 07 10:15:18 compute-1 nova_compute[230488]: 2025-12-07 10:15:18.045 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 07 10:15:18 compute-1 nova_compute[230488]: 2025-12-07 10:15:18.063 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 07 10:15:18 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 07 10:15:18 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3802415036' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:15:18 compute-1 nova_compute[230488]: 2025-12-07 10:15:18.542 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 07 10:15:18 compute-1 nova_compute[230488]: 2025-12-07 10:15:18.549 230492 DEBUG nova.compute.provider_tree [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Inventory has not changed in ProviderTree for provider: 58b51610-0751-43d9-94a3-66540bffec81 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 07 10:15:18 compute-1 nova_compute[230488]: 2025-12-07 10:15:18.566 230492 DEBUG nova.scheduler.client.report [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Inventory has not changed for provider 58b51610-0751-43d9-94a3-66540bffec81 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 07 10:15:18 compute-1 nova_compute[230488]: 2025-12-07 10:15:18.569 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 07 10:15:18 compute-1 nova_compute[230488]: 2025-12-07 10:15:18.570 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.592s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:15:19 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:19 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:15:19 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:19.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:15:19 compute-1 ceph-mon[80077]: pgmap v992: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 367 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 07 10:15:19 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/592720800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:15:19 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:19 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:15:19 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:19.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:15:20 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/3802415036' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:15:20 compute-1 ceph-mon[80077]: pgmap v993: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 371 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Dec 07 10:15:20 compute-1 sudo[238564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 10:15:20 compute-1 sudo[238564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:15:20 compute-1 sudo[238564]: pam_unix(sudo:session): session closed for user root
Dec 07 10:15:20 compute-1 nova_compute[230488]: 2025-12-07 10:15:20.570 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:15:20 compute-1 nova_compute[230488]: 2025-12-07 10:15:20.571 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:15:20 compute-1 nova_compute[230488]: 2025-12-07 10:15:20.571 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:15:20 compute-1 sudo[238589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 07 10:15:20 compute-1 sudo[238589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:15:21 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:21 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:15:21 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:21.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:15:21 compute-1 nova_compute[230488]: 2025-12-07 10:15:21.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:15:21 compute-1 nova_compute[230488]: 2025-12-07 10:15:21.270 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 07 10:15:21 compute-1 podman[238684]: 2025-12-07 10:15:21.36513628 +0000 UTC m=+0.107482776 container exec 0adb3b962f9be9f77484cea48a61ddb6dbdac27d2f6f2d11917c2759647cf1b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-crash-compute-1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 07 10:15:21 compute-1 podman[238684]: 2025-12-07 10:15:21.486971523 +0000 UTC m=+0.229318029 container exec_died 0adb3b962f9be9f77484cea48a61ddb6dbdac27d2f6f2d11917c2759647cf1b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-crash-compute-1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 07 10:15:21 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:21 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:15:21 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:21.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:15:22 compute-1 podman[238803]: 2025-12-07 10:15:22.197457287 +0000 UTC m=+0.082011049 container exec 0a4d2954764c22b86121bac3d11bc9b63e47782ce61bae7b4370d135249c4a00 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 07 10:15:22 compute-1 podman[238803]: 2025-12-07 10:15:22.214123907 +0000 UTC m=+0.098677679 container exec_died 0a4d2954764c22b86121bac3d11bc9b63e47782ce61bae7b4370d135249c4a00 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 07 10:15:22 compute-1 ceph-mon[80077]: pgmap v994: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 12 KiB/s wr, 6 op/s
Dec 07 10:15:22 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:15:22 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:15:22 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 07 10:15:22 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/3195755751' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:15:22 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:15:23 compute-1 podman[238941]: 2025-12-07 10:15:23.075484349 +0000 UTC m=+0.090108647 container exec beb6c210855d5dee9a79b7f50cd4854bb9f0b0f00930d5f252196d17b26aec7f (image=quay.io/ceph/haproxy:2.3, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua)
Dec 07 10:15:23 compute-1 podman[238941]: 2025-12-07 10:15:23.088030998 +0000 UTC m=+0.102655296 container exec_died beb6c210855d5dee9a79b7f50cd4854bb9f0b0f00930d5f252196d17b26aec7f (image=quay.io/ceph/haproxy:2.3, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua)
Dec 07 10:15:23 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:23 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:15:23 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:23.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:15:23 compute-1 nova_compute[230488]: 2025-12-07 10:15:23.270 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:15:23 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2969326572' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:15:23 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:15:23 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:15:23 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/3211854813' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:15:23 compute-1 podman[239008]: 2025-12-07 10:15:23.401582683 +0000 UTC m=+0.074222867 container exec 63d2276d352335c300047c0a6c5b69b30b1c28b930e12681d437c0d1d4a9599f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-nfs-cephfs-compute-1-gawwbe, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., description=keepalived for Ceph, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, architecture=x86_64, version=2.2.4, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9)
Dec 07 10:15:23 compute-1 podman[239008]: 2025-12-07 10:15:23.439024765 +0000 UTC m=+0.111664929 container exec_died 63d2276d352335c300047c0a6c5b69b30b1c28b930e12681d437c0d1d4a9599f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-nfs-cephfs-compute-1-gawwbe, vcs-type=git, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, name=keepalived, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, description=keepalived for Ceph, io.buildah.version=1.28.2, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793)
Dec 07 10:15:23 compute-1 sudo[238589]: pam_unix(sudo:session): session closed for user root
Dec 07 10:15:23 compute-1 sudo[239041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 10:15:23 compute-1 sudo[239041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:15:23 compute-1 sudo[239041]: pam_unix(sudo:session): session closed for user root
Dec 07 10:15:23 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:15:23.636 142603 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e231b22a-cdf9-44dd-ad96-a8e48b3d52da, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 07 10:15:23 compute-1 sudo[239066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 07 10:15:23 compute-1 sudo[239066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:15:23 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:23 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:15:23 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:23.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:15:24 compute-1 nova_compute[230488]: 2025-12-07 10:15:24.268 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:15:24 compute-1 nova_compute[230488]: 2025-12-07 10:15:24.269 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 07 10:15:24 compute-1 nova_compute[230488]: 2025-12-07 10:15:24.269 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 07 10:15:24 compute-1 sudo[239066]: pam_unix(sudo:session): session closed for user root
Dec 07 10:15:24 compute-1 nova_compute[230488]: 2025-12-07 10:15:24.300 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 07 10:15:24 compute-1 nova_compute[230488]: 2025-12-07 10:15:24.300 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:15:24 compute-1 sudo[239122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 10:15:24 compute-1 sudo[239122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:15:24 compute-1 sudo[239122]: pam_unix(sudo:session): session closed for user root
Dec 07 10:15:24 compute-1 sudo[239147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Dec 07 10:15:24 compute-1 sudo[239147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:15:24 compute-1 ceph-mon[80077]: pgmap v995: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 12 KiB/s wr, 6 op/s
Dec 07 10:15:24 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:15:24 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:15:24 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/3211758301' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:15:24 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:15:24 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:15:24 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 07 10:15:24 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/230914401' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:15:24 compute-1 sudo[239147]: pam_unix(sudo:session): session closed for user root
Dec 07 10:15:24 compute-1 sudo[239190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:15:24 compute-1 sudo[239190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:15:24 compute-1 sudo[239190]: pam_unix(sudo:session): session closed for user root
Dec 07 10:15:25 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:25 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:15:25 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:25.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:15:25 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:15:25 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:15:25 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 07 10:15:25 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:15:25 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 07 10:15:25 compute-1 ceph-mon[80077]: pgmap v996: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 20 KiB/s wr, 36 op/s
Dec 07 10:15:25 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:15:25 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:15:25 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 07 10:15:25 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 07 10:15:25 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:15:25 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:25 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:15:25 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:25.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:15:26 compute-1 sshd-session[239215]: Invalid user postgres from 104.248.193.130 port 36914
Dec 07 10:15:26 compute-1 sshd-session[239215]: Connection closed by invalid user postgres 104.248.193.130 port 36914 [preauth]
Dec 07 10:15:26 compute-1 podman[239217]: 2025-12-07 10:15:26.198306486 +0000 UTC m=+0.125136824 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller)
Dec 07 10:15:26 compute-1 ceph-mon[80077]: Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Dec 07 10:15:27 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:27 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:15:27 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:27.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:15:27 compute-1 nova_compute[230488]: 2025-12-07 10:15:27.296 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:15:27 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:27 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:15:27 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:27.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:15:27 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:15:28 compute-1 ceph-mon[80077]: pgmap v997: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 20 KiB/s wr, 31 op/s
Dec 07 10:15:28 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:15:28 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:15:29 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:29 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:15:29 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:29.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:15:29 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:29 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:15:29 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:29.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:15:30 compute-1 sudo[239248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 07 10:15:30 compute-1 sudo[239248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:15:30 compute-1 sudo[239248]: pam_unix(sudo:session): session closed for user root
Dec 07 10:15:30 compute-1 ceph-mon[80077]: pgmap v998: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 20 KiB/s wr, 31 op/s
Dec 07 10:15:30 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:15:30 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:15:31 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:31 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:15:31 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:31.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:15:31 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:31 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:15:31 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:31.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:15:32 compute-1 ceph-mon[80077]: pgmap v999: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 6.8 KiB/s wr, 30 op/s
Dec 07 10:15:32 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:15:33 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:33 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:15:33 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:33.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:15:33 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:33 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:15:33 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:33.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:15:34 compute-1 ceph-mon[80077]: pgmap v1000: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 6.8 KiB/s wr, 30 op/s
Dec 07 10:15:34 compute-1 podman[239275]: 2025-12-07 10:15:34.622912764 +0000 UTC m=+0.120706004 container health_status 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 07 10:15:35 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:35 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:15:35 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:35.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:15:35 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:35 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:15:35 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:35.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:15:36 compute-1 ceph-mon[80077]: pgmap v1001: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 6.8 KiB/s wr, 30 op/s
Dec 07 10:15:37 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:37 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:15:37 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:37.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:15:37 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:37 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:15:37 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:37.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:15:37 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:15:38 compute-1 ceph-mon[80077]: pgmap v1002: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:15:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:15:38.651 142603 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:15:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:15:38.652 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:15:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:15:38.652 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:15:39 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:39 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:15:39 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:39.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:15:39 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:39 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:15:39 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:39.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:15:40 compute-1 ceph-mon[80077]: pgmap v1003: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:15:41 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:41 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:15:41 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:41.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:15:41 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/2425210097' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:15:41 compute-1 ceph-mon[80077]: pgmap v1004: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:15:41 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:41 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:15:41 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:41.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:15:42 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:15:42 compute-1 podman[239301]: 2025-12-07 10:15:42.603378427 +0000 UTC m=+0.094439534 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 07 10:15:42 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:15:43 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:43 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:15:43 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:43.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:15:43 compute-1 ceph-mon[80077]: pgmap v1005: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:15:43 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:43 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:15:43 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:43.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:15:45 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:45 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:15:45 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:45.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:15:45 compute-1 sudo[239321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:15:45 compute-1 sudo[239321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:15:45 compute-1 sudo[239321]: pam_unix(sudo:session): session closed for user root
Dec 07 10:15:45 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:45 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:15:45 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:45.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:15:46 compute-1 ceph-mon[80077]: pgmap v1006: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 07 10:15:46 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/3647541742' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 07 10:15:47 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/949707003' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 07 10:15:47 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:47 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:15:47 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:47.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:15:47 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:47 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:15:47 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:47.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:15:47 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:15:48 compute-1 ceph-mon[80077]: pgmap v1007: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 07 10:15:49 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:49 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:15:49 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:49.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:15:49 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:49 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:15:49 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:49.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:15:50 compute-1 ceph-mon[80077]: pgmap v1008: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Dec 07 10:15:51 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:51 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:15:51 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:51.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:15:51 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:51 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:15:51 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:51.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:15:52 compute-1 ceph-mon[80077]: pgmap v1009: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Dec 07 10:15:52 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:15:53 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:53 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:15:53 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:53.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:15:53 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:53 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:15:53 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:53.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:15:54 compute-1 ceph-mon[80077]: pgmap v1010: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Dec 07 10:15:55 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:55 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:15:55 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:55.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:15:55 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:55 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:15:55 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:55.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:15:56 compute-1 ceph-mon[80077]: pgmap v1011: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Dec 07 10:15:56 compute-1 podman[239352]: 2025-12-07 10:15:56.619730581 +0000 UTC m=+0.115935525 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Dec 07 10:15:57 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:57 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:15:57 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:57.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:15:57 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2534622480' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:15:57 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:57 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:15:57 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:57.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:15:57 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:15:58 compute-1 ceph-mon[80077]: pgmap v1012: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec 07 10:15:58 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:15:59 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:59 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:15:59 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:15:59.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:15:59 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:15:59 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:15:59 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:15:59.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:16:00 compute-1 ceph-mon[80077]: pgmap v1013: 337 pgs: 337 active+clean; 134 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Dec 07 10:16:00 compute-1 ceph-mon[80077]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #58. Immutable memtables: 0.
Dec 07 10:16:00 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:16:00.270581) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 07 10:16:00 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 58
Dec 07 10:16:00 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102560270676, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 1326, "num_deletes": 501, "total_data_size": 2544120, "memory_usage": 2582864, "flush_reason": "Manual Compaction"}
Dec 07 10:16:00 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #59: started
Dec 07 10:16:00 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102560285376, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 59, "file_size": 1564458, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30282, "largest_seqno": 31603, "table_properties": {"data_size": 1559060, "index_size": 2346, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15676, "raw_average_key_size": 19, "raw_value_size": 1546073, "raw_average_value_size": 1954, "num_data_blocks": 101, "num_entries": 791, "num_filter_entries": 791, "num_deletions": 501, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765102488, "oldest_key_time": 1765102488, "file_creation_time": 1765102560, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "19b7cd17-892b-4642-8771-311739802c4a", "db_session_id": "JE48258Z8V09XG5T5TD7", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Dec 07 10:16:00 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 14862 microseconds, and 7835 cpu microseconds.
Dec 07 10:16:00 compute-1 ceph-mon[80077]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 07 10:16:00 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:16:00.285437) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #59: 1564458 bytes OK
Dec 07 10:16:00 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:16:00.285463) [db/memtable_list.cc:519] [default] Level-0 commit table #59 started
Dec 07 10:16:00 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:16:00.287094) [db/memtable_list.cc:722] [default] Level-0 commit table #59: memtable #1 done
Dec 07 10:16:00 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:16:00.287114) EVENT_LOG_v1 {"time_micros": 1765102560287107, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 07 10:16:00 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:16:00.287138) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 07 10:16:00 compute-1 ceph-mon[80077]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 2536833, prev total WAL file size 2536833, number of live WAL files 2.
Dec 07 10:16:00 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000055.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 10:16:00 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:16:00.288419) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Dec 07 10:16:00 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 07 10:16:00 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [59(1527KB)], [57(16MB)]
Dec 07 10:16:00 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102560288478, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [59], "files_L6": [57], "score": -1, "input_data_size": 18884486, "oldest_snapshot_seqno": -1}
Dec 07 10:16:00 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #60: 5907 keys, 12778995 bytes, temperature: kUnknown
Dec 07 10:16:00 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102560401821, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 60, "file_size": 12778995, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12741334, "index_size": 21807, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14789, "raw_key_size": 152705, "raw_average_key_size": 25, "raw_value_size": 12636623, "raw_average_value_size": 2139, "num_data_blocks": 874, "num_entries": 5907, "num_filter_entries": 5907, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100473, "oldest_key_time": 0, "file_creation_time": 1765102560, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "19b7cd17-892b-4642-8771-311739802c4a", "db_session_id": "JE48258Z8V09XG5T5TD7", "orig_file_number": 60, "seqno_to_time_mapping": "N/A"}}
Dec 07 10:16:00 compute-1 ceph-mon[80077]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 07 10:16:00 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:16:00.402504) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 12778995 bytes
Dec 07 10:16:00 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:16:00.404462) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 165.8 rd, 112.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 16.5 +0.0 blob) out(12.2 +0.0 blob), read-write-amplify(20.2) write-amplify(8.2) OK, records in: 6925, records dropped: 1018 output_compression: NoCompression
Dec 07 10:16:00 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:16:00.404478) EVENT_LOG_v1 {"time_micros": 1765102560404470, "job": 34, "event": "compaction_finished", "compaction_time_micros": 113874, "compaction_time_cpu_micros": 49769, "output_level": 6, "num_output_files": 1, "total_output_size": 12778995, "num_input_records": 6925, "num_output_records": 5907, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 07 10:16:00 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 10:16:00 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102560404805, "job": 34, "event": "table_file_deletion", "file_number": 59}
Dec 07 10:16:00 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000057.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 10:16:00 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102560407640, "job": 34, "event": "table_file_deletion", "file_number": 57}
Dec 07 10:16:00 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:16:00.288319) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:16:00 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:16:00.407667) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:16:00 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:16:00.407670) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:16:00 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:16:00.407671) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:16:00 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:16:00.407673) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:16:00 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:16:00.407674) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:16:01 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:01 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:16:01 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:01.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:16:01 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/4241900761' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 07 10:16:01 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:01 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:16:01 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:16:01.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:16:02 compute-1 ceph-mon[80077]: pgmap v1014: 337 pgs: 337 active+clean; 134 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 91 op/s
Dec 07 10:16:02 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/1985759486' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 07 10:16:02 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:16:03 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:03 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:16:03 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:03.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:16:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/1522913095' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 07 10:16:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/1522913095' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 07 10:16:03 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:03 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:16:03 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:16:03.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:16:04 compute-1 ceph-mon[80077]: pgmap v1015: 337 pgs: 337 active+clean; 134 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 91 op/s
Dec 07 10:16:05 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:05 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:16:05 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:05.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:16:05 compute-1 sudo[239383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:16:05 compute-1 sudo[239383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:16:05 compute-1 sudo[239383]: pam_unix(sudo:session): session closed for user root
Dec 07 10:16:05 compute-1 podman[239407]: 2025-12-07 10:16:05.359750075 +0000 UTC m=+0.064670969 container health_status 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 07 10:16:05 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:05 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:16:05 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:16:05.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:16:06 compute-1 ceph-mon[80077]: pgmap v1016: 337 pgs: 337 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 156 op/s
Dec 07 10:16:07 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:07 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:16:07 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:07.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:16:07 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:07 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:16:07 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:16:07.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:16:07 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:16:08 compute-1 ceph-mon[80077]: pgmap v1017: 337 pgs: 337 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 235 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Dec 07 10:16:09 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:09 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:16:09 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:09.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:16:09 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:09 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:16:09 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:16:09.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:16:10 compute-1 sshd-session[239433]: Invalid user postgres from 104.248.193.130 port 34120
Dec 07 10:16:10 compute-1 sshd-session[239433]: Connection closed by invalid user postgres 104.248.193.130 port 34120 [preauth]
Dec 07 10:16:10 compute-1 ceph-mon[80077]: pgmap v1018: 337 pgs: 337 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 161 op/s
Dec 07 10:16:11 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:11 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:16:11 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:11.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:16:11 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:11 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:16:11 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:16:11.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:16:12 compute-1 ceph-mon[80077]: pgmap v1019: 337 pgs: 337 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 134 op/s
Dec 07 10:16:12 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:16:13 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:13 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:16:13 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:13.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:16:13 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:16:13 compute-1 podman[239437]: 2025-12-07 10:16:13.549461484 +0000 UTC m=+0.059700115 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 07 10:16:13 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:13 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:16:13 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:16:13.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:16:14 compute-1 ceph-mon[80077]: pgmap v1020: 337 pgs: 337 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 134 op/s
Dec 07 10:16:15 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:15 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:16:15 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:15.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:16:15 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:15 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:16:15 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:16:15.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:16:16 compute-1 ceph-mon[80077]: pgmap v1021: 337 pgs: 337 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 134 op/s
Dec 07 10:16:17 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:17 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:16:17 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:17.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:16:17 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:17 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:16:17 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:16:17.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:16:17 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:16:18 compute-1 ceph-mon[80077]: pgmap v1022: 337 pgs: 337 active+clean; 167 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 70 op/s
Dec 07 10:16:19 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:19 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:16:19 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:19.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:16:19 compute-1 nova_compute[230488]: 2025-12-07 10:16:19.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:16:19 compute-1 nova_compute[230488]: 2025-12-07 10:16:19.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:16:19 compute-1 nova_compute[230488]: 2025-12-07 10:16:19.308 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:16:19 compute-1 nova_compute[230488]: 2025-12-07 10:16:19.308 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:16:19 compute-1 nova_compute[230488]: 2025-12-07 10:16:19.309 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:16:19 compute-1 nova_compute[230488]: 2025-12-07 10:16:19.309 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 07 10:16:19 compute-1 nova_compute[230488]: 2025-12-07 10:16:19.310 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 07 10:16:19 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 07 10:16:19 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3262896966' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:16:19 compute-1 nova_compute[230488]: 2025-12-07 10:16:19.778 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 07 10:16:19 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:19 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:16:19 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:16:19.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:16:20 compute-1 nova_compute[230488]: 2025-12-07 10:16:20.003 230492 WARNING nova.virt.libvirt.driver [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 07 10:16:20 compute-1 nova_compute[230488]: 2025-12-07 10:16:20.004 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5226MB free_disk=59.89735412597656GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 07 10:16:20 compute-1 nova_compute[230488]: 2025-12-07 10:16:20.004 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:16:20 compute-1 nova_compute[230488]: 2025-12-07 10:16:20.005 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:16:20 compute-1 nova_compute[230488]: 2025-12-07 10:16:20.093 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 07 10:16:20 compute-1 nova_compute[230488]: 2025-12-07 10:16:20.094 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 07 10:16:20 compute-1 nova_compute[230488]: 2025-12-07 10:16:20.122 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 07 10:16:20 compute-1 ceph-mon[80077]: pgmap v1023: 337 pgs: 337 active+clean; 200 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 130 op/s
Dec 07 10:16:20 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/3262896966' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:16:20 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 07 10:16:20 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3476472337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:16:20 compute-1 nova_compute[230488]: 2025-12-07 10:16:20.602 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 07 10:16:20 compute-1 nova_compute[230488]: 2025-12-07 10:16:20.609 230492 DEBUG nova.compute.provider_tree [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Inventory has not changed in ProviderTree for provider: 58b51610-0751-43d9-94a3-66540bffec81 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 07 10:16:20 compute-1 nova_compute[230488]: 2025-12-07 10:16:20.631 230492 DEBUG nova.scheduler.client.report [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Inventory has not changed for provider 58b51610-0751-43d9-94a3-66540bffec81 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 07 10:16:20 compute-1 nova_compute[230488]: 2025-12-07 10:16:20.634 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 07 10:16:20 compute-1 nova_compute[230488]: 2025-12-07 10:16:20.635 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:16:21 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:21 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:16:21 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:21.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:16:21 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/3476472337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:16:21 compute-1 nova_compute[230488]: 2025-12-07 10:16:21.636 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:16:21 compute-1 nova_compute[230488]: 2025-12-07 10:16:21.636 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:16:21 compute-1 nova_compute[230488]: 2025-12-07 10:16:21.637 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:16:21 compute-1 nova_compute[230488]: 2025-12-07 10:16:21.637 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 07 10:16:21 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:21 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:16:21 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:16:21.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:16:22 compute-1 ceph-mon[80077]: pgmap v1024: 337 pgs: 337 active+clean; 200 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 224 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Dec 07 10:16:22 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:16:23 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:23 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:16:23 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:23.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:16:23 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:23 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:16:23 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:16:23.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:16:24 compute-1 nova_compute[230488]: 2025-12-07 10:16:24.270 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:16:24 compute-1 nova_compute[230488]: 2025-12-07 10:16:24.270 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:16:24 compute-1 ceph-mon[80077]: pgmap v1025: 337 pgs: 337 active+clean; 200 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 224 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Dec 07 10:16:24 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1504565590' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:16:25 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:25 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:16:25 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:25.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:16:25 compute-1 nova_compute[230488]: 2025-12-07 10:16:25.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:16:25 compute-1 nova_compute[230488]: 2025-12-07 10:16:25.270 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 07 10:16:25 compute-1 nova_compute[230488]: 2025-12-07 10:16:25.270 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 07 10:16:25 compute-1 nova_compute[230488]: 2025-12-07 10:16:25.297 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 07 10:16:25 compute-1 sudo[239506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:16:25 compute-1 sudo[239506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:16:25 compute-1 sudo[239506]: pam_unix(sudo:session): session closed for user root
Dec 07 10:16:25 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/539489423' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:16:25 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:25 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:16:25 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:16:25.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:16:25 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:16:25.921 142603 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '76:bc:71', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ce:90:a9:76:77:00'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 07 10:16:25 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:16:25.923 142603 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 07 10:16:26 compute-1 ceph-mon[80077]: pgmap v1026: 337 pgs: 337 active+clean; 200 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 224 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Dec 07 10:16:26 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/1399851131' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:16:27 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:27 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:16:27 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:27.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:16:27 compute-1 ceph-mon[80077]: pgmap v1027: 337 pgs: 337 active+clean; 200 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 224 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Dec 07 10:16:27 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2983246267' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:16:27 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:16:27 compute-1 podman[239532]: 2025-12-07 10:16:27.614588169 +0000 UTC m=+0.110232550 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Dec 07 10:16:27 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:16:27 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:27 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:16:27 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:16:27.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:16:29 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:29 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:16:29 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:29.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:16:29 compute-1 nova_compute[230488]: 2025-12-07 10:16:29.292 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:16:29 compute-1 ceph-mon[80077]: pgmap v1028: 337 pgs: 337 active+clean; 200 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 229 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Dec 07 10:16:29 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:29 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:16:29 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:16:29.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:16:30 compute-1 nova_compute[230488]: 2025-12-07 10:16:30.264 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:16:30 compute-1 sudo[239560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 10:16:30 compute-1 sudo[239560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:16:30 compute-1 sudo[239560]: pam_unix(sudo:session): session closed for user root
Dec 07 10:16:30 compute-1 sudo[239585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 07 10:16:30 compute-1 sudo[239585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:16:31 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:31 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:16:31 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:31.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:16:31 compute-1 sudo[239585]: pam_unix(sudo:session): session closed for user root
Dec 07 10:16:31 compute-1 ceph-mon[80077]: pgmap v1029: 337 pgs: 337 active+clean; 200 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 6.3 KiB/s rd, 13 KiB/s wr, 1 op/s
Dec 07 10:16:31 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:16:31 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 07 10:16:31 compute-1 ceph-mon[80077]: pgmap v1030: 337 pgs: 337 active+clean; 200 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 7.1 KiB/s rd, 15 KiB/s wr, 2 op/s
Dec 07 10:16:31 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:16:31 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:16:31 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 07 10:16:31 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 07 10:16:31 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:16:31 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:31 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:16:31 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:16:31.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:16:31 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:16:31.924 142603 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e231b22a-cdf9-44dd-ad96-a8e48b3d52da, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 07 10:16:32 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/368416281' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:16:32 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:16:33 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:33 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:16:33 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:33.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:16:33 compute-1 ceph-mon[80077]: pgmap v1031: 337 pgs: 337 active+clean; 121 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 24 KiB/s wr, 34 op/s
Dec 07 10:16:33 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:33 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.002000054s ======
Dec 07 10:16:33 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:16:33.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Dec 07 10:16:35 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:35 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:16:35 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:35.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:16:35 compute-1 podman[239643]: 2025-12-07 10:16:35.602417762 +0000 UTC m=+0.096545701 container health_status 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_managed=true)
Dec 07 10:16:35 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:35 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:16:35 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:16:35.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:16:36 compute-1 ceph-mon[80077]: pgmap v1032: 337 pgs: 337 active+clean; 121 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 8.8 KiB/s wr, 33 op/s
Dec 07 10:16:36 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 07 10:16:36 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2322907568' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:16:36 compute-1 sudo[239665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 07 10:16:36 compute-1 sudo[239665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:16:36 compute-1 sudo[239665]: pam_unix(sudo:session): session closed for user root
Dec 07 10:16:37 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:37 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:16:37 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:37.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:16:37 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/2322907568' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:16:37 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:16:37 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:16:37 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:16:37 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:37 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:16:37 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:16:37.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:16:38 compute-1 ceph-mon[80077]: pgmap v1033: 337 pgs: 337 active+clean; 121 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 8.8 KiB/s wr, 33 op/s
Dec 07 10:16:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:16:38.653 142603 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:16:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:16:38.654 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:16:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:16:38.654 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:16:39 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:39 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:16:39 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:39.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:16:39 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:39 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:16:39 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:16:39.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:16:40 compute-1 ceph-mon[80077]: pgmap v1034: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 10 KiB/s wr, 64 op/s
Dec 07 10:16:41 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:41 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:16:41 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:41.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:16:41 compute-1 ceph-mon[80077]: pgmap v1035: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 10 KiB/s wr, 64 op/s
Dec 07 10:16:41 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:41 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:16:41 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:16:41.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:16:42 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:16:42 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:16:43 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:43 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:16:43 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:43.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:16:43 compute-1 ceph-mon[80077]: pgmap v1036: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 9.0 KiB/s wr, 57 op/s
Dec 07 10:16:43 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:43 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:16:43 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:16:43.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:16:44 compute-1 podman[239694]: 2025-12-07 10:16:44.621389013 +0000 UTC m=+0.111905826 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 07 10:16:45 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:45 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:16:45 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:45.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:16:45 compute-1 sudo[239714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:16:45 compute-1 sudo[239714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:16:45 compute-1 sudo[239714]: pam_unix(sudo:session): session closed for user root
Dec 07 10:16:45 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:45 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:16:45 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:16:45.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:16:47 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:47 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:16:47 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:47.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:16:47 compute-1 ceph-mon[80077]: pgmap v1037: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec 07 10:16:47 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:16:47 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:47 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:16:47 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:16:47.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:16:48 compute-1 ceph-mon[80077]: pgmap v1038: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec 07 10:16:49 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:49 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:16:49 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:49.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:16:49 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:49 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:16:49 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:16:49.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:16:50 compute-1 ceph-mon[80077]: pgmap v1039: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Dec 07 10:16:51 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:51 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:16:51 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:51.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:16:51 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:51 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:16:51 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:16:51.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:16:52 compute-1 ceph-mon[80077]: pgmap v1040: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:16:52 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:16:53 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:53 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:16:53 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:53.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:16:53 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:53 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:16:53 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:16:53.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:16:54 compute-1 sshd-session[239743]: Invalid user postgres from 104.248.193.130 port 36330
Dec 07 10:16:54 compute-1 sshd-session[239743]: Connection closed by invalid user postgres 104.248.193.130 port 36330 [preauth]
Dec 07 10:16:54 compute-1 ceph-mon[80077]: pgmap v1041: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:16:55 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:55 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:16:55 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:55.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:16:55 compute-1 ceph-mon[80077]: pgmap v1042: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:16:55 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:55 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:16:55 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:16:55.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:16:56 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/2556038211' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:16:57 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:57 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:16:57 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:57.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:16:57 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:16:57 compute-1 ceph-mon[80077]: pgmap v1043: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:16:57 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:16:57 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:57 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:16:57 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:16:57.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:16:58 compute-1 podman[239748]: 2025-12-07 10:16:58.718294166 +0000 UTC m=+0.209716449 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 07 10:16:59 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:59 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:16:59 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:16:59.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:16:59 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:16:59 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:16:59 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:16:59.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:17:00 compute-1 ceph-mon[80077]: pgmap v1044: 337 pgs: 337 active+clean; 88 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 07 10:17:01 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:17:01 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:17:01 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:01.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:17:01 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1224249925' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 07 10:17:01 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1283011853' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 07 10:17:01 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:17:01 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:17:01 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:17:01.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:17:02 compute-1 ceph-mon[80077]: pgmap v1045: 337 pgs: 337 active+clean; 88 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 07 10:17:02 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:17:02 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 07 10:17:02 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2623003367' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 07 10:17:02 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 07 10:17:02 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2623003367' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 07 10:17:03 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:17:03 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:17:03 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:03.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:17:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/2623003367' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 07 10:17:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/2623003367' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 07 10:17:03 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:17:03 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:17:03 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:17:03.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:17:04 compute-1 ceph-mon[80077]: pgmap v1046: 337 pgs: 337 active+clean; 88 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Dec 07 10:17:05 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:17:05 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:17:05 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:05.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:17:05 compute-1 ceph-mon[80077]: pgmap v1047: 337 pgs: 337 active+clean; 88 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Dec 07 10:17:05 compute-1 sudo[239777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:17:05 compute-1 sudo[239777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:17:05 compute-1 sudo[239777]: pam_unix(sudo:session): session closed for user root
Dec 07 10:17:05 compute-1 podman[239801]: 2025-12-07 10:17:05.73979258 +0000 UTC m=+0.083301003 container health_status 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd)
Dec 07 10:17:05 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:17:05 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:17:05 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:17:05.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:17:07 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:17:07 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:17:07 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:07.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:17:07 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:17:07 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:17:07 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:17:07 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:17:07.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:17:08 compute-1 ceph-mon[80077]: pgmap v1048: 337 pgs: 337 active+clean; 88 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Dec 07 10:17:09 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:17:09 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:17:09 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:09.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:17:09 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:17:09 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:17:09 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:17:09.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:17:10 compute-1 ceph-mon[80077]: pgmap v1049: 337 pgs: 337 active+clean; 88 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Dec 07 10:17:11 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:17:11 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:17:11 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:11.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:17:11 compute-1 ceph-mon[80077]: pgmap v1050: 337 pgs: 337 active+clean; 88 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec 07 10:17:11 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:17:11 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:17:11 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:17:11.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:17:12 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:17:12 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:17:13 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:17:13 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:17:13 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:13.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:17:13 compute-1 ceph-mon[80077]: pgmap v1051: 337 pgs: 337 active+clean; 88 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Dec 07 10:17:13 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:17:13 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:17:13 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:17:13.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:17:15 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:17:15 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:17:15 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:15.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:17:15 compute-1 podman[239828]: 2025-12-07 10:17:15.419668527 +0000 UTC m=+0.104439364 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Dec 07 10:17:15 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:17:15 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:17:15 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:17:15.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:17:16 compute-1 ceph-mon[80077]: pgmap v1052: 337 pgs: 337 active+clean; 88 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Dec 07 10:17:17 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:17:17 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:17:17 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:17.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:17:17 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:17:17 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:17:17 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:17:17 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:17:17.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:17:18 compute-1 ceph-mon[80077]: pgmap v1053: 337 pgs: 337 active+clean; 88 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Dec 07 10:17:19 compute-1 nova_compute[230488]: 2025-12-07 10:17:19.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:17:19 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:17:19 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:17:19 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:19.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:17:19 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:17:19 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:17:19 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:17:19.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:17:20 compute-1 ceph-mon[80077]: pgmap v1054: 337 pgs: 337 active+clean; 121 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 127 op/s
Dec 07 10:17:21 compute-1 nova_compute[230488]: 2025-12-07 10:17:21.268 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:17:21 compute-1 nova_compute[230488]: 2025-12-07 10:17:21.268 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 07 10:17:21 compute-1 nova_compute[230488]: 2025-12-07 10:17:21.268 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:17:21 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:17:21 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:17:21 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:21.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:17:21 compute-1 nova_compute[230488]: 2025-12-07 10:17:21.291 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:17:21 compute-1 nova_compute[230488]: 2025-12-07 10:17:21.291 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:17:21 compute-1 nova_compute[230488]: 2025-12-07 10:17:21.291 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:17:21 compute-1 nova_compute[230488]: 2025-12-07 10:17:21.292 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 07 10:17:21 compute-1 nova_compute[230488]: 2025-12-07 10:17:21.292 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 07 10:17:21 compute-1 ceph-mon[80077]: pgmap v1055: 337 pgs: 337 active+clean; 121 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Dec 07 10:17:21 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 07 10:17:21 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3735647094' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:17:21 compute-1 nova_compute[230488]: 2025-12-07 10:17:21.752 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 07 10:17:21 compute-1 nova_compute[230488]: 2025-12-07 10:17:21.953 230492 WARNING nova.virt.libvirt.driver [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 07 10:17:21 compute-1 nova_compute[230488]: 2025-12-07 10:17:21.954 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5269MB free_disk=59.94289016723633GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 07 10:17:21 compute-1 nova_compute[230488]: 2025-12-07 10:17:21.954 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:17:21 compute-1 nova_compute[230488]: 2025-12-07 10:17:21.955 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:17:21 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:17:21 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:17:21 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:17:21.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:17:22 compute-1 nova_compute[230488]: 2025-12-07 10:17:22.151 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 07 10:17:22 compute-1 nova_compute[230488]: 2025-12-07 10:17:22.152 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 07 10:17:22 compute-1 nova_compute[230488]: 2025-12-07 10:17:22.323 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 07 10:17:22 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/3735647094' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:17:22 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 07 10:17:22 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2441943791' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:17:22 compute-1 nova_compute[230488]: 2025-12-07 10:17:22.786 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 07 10:17:22 compute-1 nova_compute[230488]: 2025-12-07 10:17:22.794 230492 DEBUG nova.compute.provider_tree [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Inventory has not changed in ProviderTree for provider: 58b51610-0751-43d9-94a3-66540bffec81 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 07 10:17:22 compute-1 nova_compute[230488]: 2025-12-07 10:17:22.812 230492 DEBUG nova.scheduler.client.report [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Inventory has not changed for provider 58b51610-0751-43d9-94a3-66540bffec81 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 07 10:17:22 compute-1 nova_compute[230488]: 2025-12-07 10:17:22.815 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 07 10:17:22 compute-1 nova_compute[230488]: 2025-12-07 10:17:22.815 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.861s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:17:22 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:17:23 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:17:23 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:17:23 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:23.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:17:23 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/2441943791' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:17:23 compute-1 ceph-mon[80077]: pgmap v1056: 337 pgs: 337 active+clean; 121 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 307 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 07 10:17:23 compute-1 nova_compute[230488]: 2025-12-07 10:17:23.817 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:17:23 compute-1 nova_compute[230488]: 2025-12-07 10:17:23.818 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:17:23 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:17:23 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:17:23 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:17:23.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:17:25 compute-1 nova_compute[230488]: 2025-12-07 10:17:25.268 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:17:25 compute-1 nova_compute[230488]: 2025-12-07 10:17:25.269 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 07 10:17:25 compute-1 nova_compute[230488]: 2025-12-07 10:17:25.269 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 07 10:17:25 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:17:25 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:17:25 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:25.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:17:25 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/2745725524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:17:25 compute-1 nova_compute[230488]: 2025-12-07 10:17:25.293 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 07 10:17:25 compute-1 nova_compute[230488]: 2025-12-07 10:17:25.293 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:17:25 compute-1 nova_compute[230488]: 2025-12-07 10:17:25.294 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:17:25 compute-1 sudo[239897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:17:25 compute-1 sudo[239897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:17:25 compute-1 sudo[239897]: pam_unix(sudo:session): session closed for user root
Dec 07 10:17:25 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:17:25 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:17:25 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:17:25.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:17:26 compute-1 ceph-mon[80077]: pgmap v1057: 337 pgs: 337 active+clean; 121 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 07 10:17:26 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/1993118706' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:17:27 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:17:27 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:17:27 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:27.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:17:27 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/151595899' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:17:27 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/667349036' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:17:27 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:17:27 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:17:27 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:17:27 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:17:27.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:17:28 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:17:28 compute-1 ceph-mon[80077]: pgmap v1058: 337 pgs: 337 active+clean; 121 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 07 10:17:29 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:17:29 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:17:29 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:29.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:17:29 compute-1 podman[239924]: 2025-12-07 10:17:29.59871178 +0000 UTC m=+0.100332203 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 07 10:17:29 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:17:29 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:17:29 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:17:29.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:17:30 compute-1 nova_compute[230488]: 2025-12-07 10:17:30.277 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:17:30 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:17:30.396 142603 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '76:bc:71', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ce:90:a9:76:77:00'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 07 10:17:30 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:17:30.398 142603 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 07 10:17:30 compute-1 ceph-mon[80077]: pgmap v1059: 337 pgs: 337 active+clean; 121 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 307 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 07 10:17:31 compute-1 nova_compute[230488]: 2025-12-07 10:17:31.283 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:17:31 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:17:31 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:17:31 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:31.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:17:31 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:17:31 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:17:31 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:17:31.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:17:32 compute-1 nova_compute[230488]: 2025-12-07 10:17:32.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:17:32 compute-1 nova_compute[230488]: 2025-12-07 10:17:32.269 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 07 10:17:32 compute-1 nova_compute[230488]: 2025-12-07 10:17:32.432 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 07 10:17:32 compute-1 ceph-mon[80077]: pgmap v1060: 337 pgs: 337 active+clean; 121 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 12 KiB/s wr, 1 op/s
Dec 07 10:17:32 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:17:33 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:17:33 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:17:33 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:33.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:17:33 compute-1 ceph-mon[80077]: pgmap v1061: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 15 KiB/s wr, 29 op/s
Dec 07 10:17:33 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:17:33 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:17:33 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:17:33.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:17:34 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/861001628' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:17:35 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:17:35 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:17:35 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:35.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:17:35 compute-1 ceph-mon[80077]: pgmap v1062: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Dec 07 10:17:35 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:17:35 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:17:35 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:17:35.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:17:36 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:17:36.402 142603 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e231b22a-cdf9-44dd-ad96-a8e48b3d52da, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 07 10:17:36 compute-1 podman[239954]: 2025-12-07 10:17:36.610034295 +0000 UTC m=+0.098805331 container health_status 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 07 10:17:37 compute-1 sudo[239975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 10:17:37 compute-1 sudo[239975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:17:37 compute-1 sudo[239975]: pam_unix(sudo:session): session closed for user root
Dec 07 10:17:37 compute-1 sudo[240000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 07 10:17:37 compute-1 sudo[240000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:17:37 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:17:37 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:17:37 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:37.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:17:37 compute-1 sshd-session[240023]: Invalid user postgres from 104.248.193.130 port 39352
Dec 07 10:17:37 compute-1 sshd-session[240023]: Connection closed by invalid user postgres 104.248.193.130 port 39352 [preauth]
Dec 07 10:17:37 compute-1 sudo[240000]: pam_unix(sudo:session): session closed for user root
Dec 07 10:17:37 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:17:37 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:17:38 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:17:38 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:17:37.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:17:38 compute-1 ceph-mon[80077]: pgmap v1063: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Dec 07 10:17:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:17:38.654 142603 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:17:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:17:38.655 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:17:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:17:38.655 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:17:39 compute-1 nova_compute[230488]: 2025-12-07 10:17:39.270 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:17:39 compute-1 nova_compute[230488]: 2025-12-07 10:17:39.270 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 07 10:17:39 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:17:39 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:17:39 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:39.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:17:40 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:17:40 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:17:40 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:17:40.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:17:40 compute-1 ceph-mon[80077]: pgmap v1064: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.2 KiB/s wr, 29 op/s
Dec 07 10:17:41 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:17:41 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:17:41 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:41.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:17:42 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:17:42 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:17:42 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:17:42.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:17:42 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:17:42 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:17:42 compute-1 ceph-mon[80077]: pgmap v1065: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Dec 07 10:17:42 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:17:42 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 07 10:17:42 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:17:42 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:17:42 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 07 10:17:42 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 07 10:17:42 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:17:42 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:17:43 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:17:43 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:17:43 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:43.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:17:43 compute-1 ceph-mon[80077]: pgmap v1066: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 3.6 KiB/s wr, 32 op/s
Dec 07 10:17:43 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:17:44 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:17:44 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:17:44 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:17:44.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:17:45 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:17:45 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:17:45 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:45.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:17:45 compute-1 ceph-mon[80077]: pgmap v1067: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 07 10:17:45 compute-1 podman[240064]: 2025-12-07 10:17:45.618136764 +0000 UTC m=+0.109346066 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent)
Dec 07 10:17:45 compute-1 sudo[240086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:17:45 compute-1 sudo[240086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:17:45 compute-1 sudo[240086]: pam_unix(sudo:session): session closed for user root
Dec 07 10:17:46 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:17:46 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:17:46 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:17:46.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:17:47 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:17:47 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:17:47 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:47.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:17:47 compute-1 ceph-mon[80077]: pgmap v1068: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 07 10:17:47 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:17:48 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:17:48 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:17:48 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:17:48.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:17:48 compute-1 sudo[240113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 07 10:17:48 compute-1 sudo[240113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:17:48 compute-1 sudo[240113]: pam_unix(sudo:session): session closed for user root
Dec 07 10:17:49 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:17:49 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:17:49 compute-1 ceph-mon[80077]: pgmap v1069: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 07 10:17:49 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:17:49 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:17:49 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:49.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:17:50 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:17:50 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:17:50 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:17:50.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:17:51 compute-1 ceph-mon[80077]: pgmap v1070: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 07 10:17:51 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:17:51 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:17:51 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:51.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:17:52 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:17:52 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:17:52 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:17:52.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:17:52 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:17:53 compute-1 ceph-mon[80077]: pgmap v1071: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 07 10:17:53 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:17:53 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:17:53 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:53.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:17:54 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:17:54 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:17:54 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:17:54.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:17:55 compute-1 ceph-mon[80077]: pgmap v1072: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:17:55 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:17:55 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:17:55 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:55.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:17:56 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:17:56 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:17:56 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:17:56.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:17:57 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:17:57 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:17:57 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:57.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:17:57 compute-1 ceph-mon[80077]: pgmap v1073: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:17:57 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:17:58 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:17:58 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:17:58 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:17:58.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:17:58 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:17:59 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:17:59 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:17:59 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:17:59.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:17:59 compute-1 ceph-mon[80077]: pgmap v1074: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:18:00 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:00 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:18:00 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:00.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:18:00 compute-1 podman[240144]: 2025-12-07 10:18:00.634186741 +0000 UTC m=+0.125494244 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 07 10:18:01 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:01 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:18:01 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:01.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:18:01 compute-1 ceph-mon[80077]: pgmap v1075: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:18:02 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:02 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:18:02 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:02.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:18:02 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:18:02 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 07 10:18:02 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4160823244' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 07 10:18:02 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 07 10:18:02 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4160823244' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 07 10:18:03 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:03 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:18:03 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:03.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:18:03 compute-1 ceph-mon[80077]: pgmap v1076: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:18:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/4160823244' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 07 10:18:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/4160823244' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 07 10:18:04 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:04 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:18:04 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:04.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:18:05 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:05 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:18:05 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:05.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:18:05 compute-1 ceph-mon[80077]: pgmap v1077: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:18:05 compute-1 sudo[240172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:18:05 compute-1 sudo[240172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:18:05 compute-1 sudo[240172]: pam_unix(sudo:session): session closed for user root
Dec 07 10:18:06 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:06 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:18:06 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:06.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:18:07 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:07 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:18:07 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:07.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:18:07 compute-1 ceph-mon[80077]: pgmap v1078: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:18:07 compute-1 podman[240198]: 2025-12-07 10:18:07.5941466 +0000 UTC m=+0.092596524 container health_status 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 07 10:18:07 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:18:08 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:08 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:18:08 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:08.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:18:09 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:09 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:18:09 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:09.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:18:09 compute-1 ceph-mon[80077]: pgmap v1079: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:18:10 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:10 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:18:10 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:10.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:18:11 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:11 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:18:11 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:11.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:18:11 compute-1 ceph-mon[80077]: pgmap v1080: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:18:12 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:12 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:18:12 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:12.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:18:12 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:18:12 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:18:13 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:13 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:18:13 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:13.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:18:13 compute-1 ceph-mon[80077]: pgmap v1081: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:18:14 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:14 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:18:14 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:14.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:18:15 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:15 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:18:15 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:15.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:18:15 compute-1 ceph-mon[80077]: pgmap v1082: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:18:16 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:16 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:18:16 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:16.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:18:16 compute-1 podman[240224]: 2025-12-07 10:18:16.614057578 +0000 UTC m=+0.102997345 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 07 10:18:17 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:17 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:18:17 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:17.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:18:17 compute-1 ceph-mon[80077]: pgmap v1083: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:18:17 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:18:18 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:18 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:18:18 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:18.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:18:18 compute-1 ceph-mon[80077]: pgmap v1084: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:18:19 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:19 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:18:19 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:19.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:18:20 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:20 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:18:20 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:20.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:18:20 compute-1 sshd-session[240246]: Invalid user postgres from 104.248.193.130 port 47234
Dec 07 10:18:21 compute-1 sshd-session[240246]: Connection closed by invalid user postgres 104.248.193.130 port 47234 [preauth]
Dec 07 10:18:21 compute-1 ceph-mon[80077]: pgmap v1085: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:18:21 compute-1 nova_compute[230488]: 2025-12-07 10:18:21.297 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:18:21 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:21 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:18:21 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:21.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:18:22 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:22 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:18:22 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:22.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:18:22 compute-1 nova_compute[230488]: 2025-12-07 10:18:22.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:18:22 compute-1 nova_compute[230488]: 2025-12-07 10:18:22.270 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:18:22 compute-1 nova_compute[230488]: 2025-12-07 10:18:22.270 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 07 10:18:22 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:18:23 compute-1 nova_compute[230488]: 2025-12-07 10:18:23.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:18:23 compute-1 nova_compute[230488]: 2025-12-07 10:18:23.270 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:18:23 compute-1 ceph-mon[80077]: pgmap v1086: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:18:23 compute-1 nova_compute[230488]: 2025-12-07 10:18:23.302 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:18:23 compute-1 nova_compute[230488]: 2025-12-07 10:18:23.303 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:18:23 compute-1 nova_compute[230488]: 2025-12-07 10:18:23.304 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:18:23 compute-1 nova_compute[230488]: 2025-12-07 10:18:23.305 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 07 10:18:23 compute-1 nova_compute[230488]: 2025-12-07 10:18:23.306 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 07 10:18:23 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:23 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:18:23 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:23.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:18:23 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 07 10:18:23 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/837147587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:18:23 compute-1 nova_compute[230488]: 2025-12-07 10:18:23.753 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 07 10:18:23 compute-1 nova_compute[230488]: 2025-12-07 10:18:23.961 230492 WARNING nova.virt.libvirt.driver [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 07 10:18:23 compute-1 nova_compute[230488]: 2025-12-07 10:18:23.963 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5234MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 07 10:18:23 compute-1 nova_compute[230488]: 2025-12-07 10:18:23.963 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:18:23 compute-1 nova_compute[230488]: 2025-12-07 10:18:23.963 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:18:24 compute-1 nova_compute[230488]: 2025-12-07 10:18:24.037 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 07 10:18:24 compute-1 nova_compute[230488]: 2025-12-07 10:18:24.038 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 07 10:18:24 compute-1 nova_compute[230488]: 2025-12-07 10:18:24.064 230492 DEBUG nova.scheduler.client.report [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Refreshing inventories for resource provider 58b51610-0751-43d9-94a3-66540bffec81 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 07 10:18:24 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:24 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:18:24 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:24.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:18:24 compute-1 nova_compute[230488]: 2025-12-07 10:18:24.224 230492 DEBUG nova.scheduler.client.report [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Updating ProviderTree inventory for provider 58b51610-0751-43d9-94a3-66540bffec81 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 07 10:18:24 compute-1 nova_compute[230488]: 2025-12-07 10:18:24.225 230492 DEBUG nova.compute.provider_tree [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Updating inventory in ProviderTree for provider 58b51610-0751-43d9-94a3-66540bffec81 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 07 10:18:24 compute-1 nova_compute[230488]: 2025-12-07 10:18:24.261 230492 DEBUG nova.scheduler.client.report [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Refreshing aggregate associations for resource provider 58b51610-0751-43d9-94a3-66540bffec81, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 07 10:18:24 compute-1 nova_compute[230488]: 2025-12-07 10:18:24.298 230492 DEBUG nova.scheduler.client.report [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Refreshing trait associations for resource provider 58b51610-0751-43d9-94a3-66540bffec81, traits: HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE,HW_CPU_X86_AESNI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AVX,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI2,COMPUTE_RESCUE_BFV,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_F16C,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SVM,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_EXTEND _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 07 10:18:24 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/837147587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:18:24 compute-1 nova_compute[230488]: 2025-12-07 10:18:24.315 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 07 10:18:24 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 07 10:18:24 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/223051144' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:18:24 compute-1 nova_compute[230488]: 2025-12-07 10:18:24.765 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 07 10:18:24 compute-1 nova_compute[230488]: 2025-12-07 10:18:24.771 230492 DEBUG nova.compute.provider_tree [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Inventory has not changed in ProviderTree for provider: 58b51610-0751-43d9-94a3-66540bffec81 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 07 10:18:24 compute-1 nova_compute[230488]: 2025-12-07 10:18:24.786 230492 DEBUG nova.scheduler.client.report [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Inventory has not changed for provider 58b51610-0751-43d9-94a3-66540bffec81 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 07 10:18:24 compute-1 nova_compute[230488]: 2025-12-07 10:18:24.787 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 07 10:18:24 compute-1 nova_compute[230488]: 2025-12-07 10:18:24.788 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.824s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:18:25 compute-1 ceph-mon[80077]: pgmap v1087: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:18:25 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/223051144' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:18:25 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:25 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:18:25 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:25.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:18:26 compute-1 sudo[240294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:18:26 compute-1 sudo[240294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:18:26 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:26 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:18:26 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:26.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:18:26 compute-1 sudo[240294]: pam_unix(sudo:session): session closed for user root
Dec 07 10:18:26 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/2978054548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:18:27 compute-1 ceph-mon[80077]: pgmap v1088: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:18:27 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/2419608147' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:18:27 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/4031027221' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:18:27 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:27 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:18:27 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:27.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:18:27 compute-1 nova_compute[230488]: 2025-12-07 10:18:27.787 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:18:27 compute-1 nova_compute[230488]: 2025-12-07 10:18:27.787 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 07 10:18:27 compute-1 nova_compute[230488]: 2025-12-07 10:18:27.787 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 07 10:18:27 compute-1 nova_compute[230488]: 2025-12-07 10:18:27.820 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 07 10:18:27 compute-1 nova_compute[230488]: 2025-12-07 10:18:27.820 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:18:27 compute-1 nova_compute[230488]: 2025-12-07 10:18:27.820 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:18:27 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:18:28 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:28 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:18:28 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:28.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:18:28 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:18:29 compute-1 ceph-mon[80077]: pgmap v1089: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:18:29 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/305949777' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:18:29 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:29 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:18:29 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:29.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:18:30 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:30 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:18:30 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:30.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:18:30 compute-1 nova_compute[230488]: 2025-12-07 10:18:30.297 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:18:31 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:31 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:18:31 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:31.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:18:31 compute-1 ceph-mon[80077]: pgmap v1090: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:18:31 compute-1 podman[240322]: 2025-12-07 10:18:31.625379446 +0000 UTC m=+0.106301504 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 07 10:18:32 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:32 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:18:32 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:32.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:18:32 compute-1 nova_compute[230488]: 2025-12-07 10:18:32.283 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:18:32 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:18:33 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:33 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:18:33 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:33.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:18:33 compute-1 ceph-mon[80077]: pgmap v1091: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:18:34 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:34 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:18:34 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:34.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:18:35 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:35 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:18:35 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:35.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:18:35 compute-1 ceph-mon[80077]: pgmap v1092: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:18:36 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:36 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:18:36 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:36.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:18:37 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:37 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:18:37 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:37.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:18:37 compute-1 sshd-session[240351]: Accepted publickey for zuul from 192.168.122.10 port 42720 ssh2: ECDSA SHA256:Ge4vEI3tJyOAEV3kv3WUGTzBV/uoL+dgWZHkPhbKL64
Dec 07 10:18:37 compute-1 systemd-logind[796]: New session 55 of user zuul.
Dec 07 10:18:37 compute-1 ceph-mon[80077]: pgmap v1093: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:18:37 compute-1 systemd[1]: Started Session 55 of User zuul.
Dec 07 10:18:37 compute-1 sshd-session[240351]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 07 10:18:37 compute-1 sudo[240355]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Dec 07 10:18:37 compute-1 sudo[240355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:18:37 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:18:38 compute-1 podman[240389]: 2025-12-07 10:18:38.010031948 +0000 UTC m=+0.132681157 container health_status 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Dec 07 10:18:38 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:38 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:18:38 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:38.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:18:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:18:38.654 142603 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:18:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:18:38.655 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:18:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:18:38.655 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:18:39 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:39 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:18:39 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:39.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:18:39 compute-1 ceph-mon[80077]: pgmap v1094: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:18:40 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:40 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:18:40 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:40.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:18:41 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:41 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:18:41 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:41.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:18:41 compute-1 ceph-mon[80077]: pgmap v1095: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:18:41 compute-1 ceph-mon[80077]: from='client.26170 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:18:41 compute-1 ceph-mon[80077]: from='client.25793 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:18:41 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status"} v 0)
Dec 07 10:18:41 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2806145263' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 07 10:18:42 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:42 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:18:42 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:42.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:18:42 compute-1 ceph-mon[80077]: from='client.16668 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:18:42 compute-1 ceph-mon[80077]: from='client.26182 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:18:42 compute-1 ceph-mon[80077]: from='client.25802 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:18:42 compute-1 ceph-mon[80077]: from='client.16680 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:18:42 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/157332656' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 07 10:18:42 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/2806145263' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 07 10:18:42 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/3015966285' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 07 10:18:42 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:18:42 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:18:43 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:43 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:18:43 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:43.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:18:43 compute-1 ceph-mon[80077]: pgmap v1096: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:18:44 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:44 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:18:44 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:44.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:18:45 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:45 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:18:45 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:45.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:18:46 compute-1 ceph-mon[80077]: pgmap v1097: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:18:46 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:46 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:18:46 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:46.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:18:46 compute-1 sudo[240721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:18:46 compute-1 sudo[240721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:18:46 compute-1 sudo[240721]: pam_unix(sudo:session): session closed for user root
Dec 07 10:18:47 compute-1 ceph-mon[80077]: pgmap v1098: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:18:47 compute-1 podman[240750]: 2025-12-07 10:18:47.194149765 +0000 UTC m=+0.083886758 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 07 10:18:47 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:47 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:18:47 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:47.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:18:47 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:18:48 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:48 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:18:48 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:48.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:18:48 compute-1 ovs-vsctl[240796]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec 07 10:18:48 compute-1 sudo[240826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 10:18:48 compute-1 sudo[240826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:18:48 compute-1 sudo[240826]: pam_unix(sudo:session): session closed for user root
Dec 07 10:18:48 compute-1 sudo[240859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 07 10:18:48 compute-1 sudo[240859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:18:49 compute-1 sudo[240859]: pam_unix(sudo:session): session closed for user root
Dec 07 10:18:49 compute-1 virtqemud[229835]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Dec 07 10:18:49 compute-1 ceph-mon[80077]: pgmap v1099: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:18:49 compute-1 virtqemud[229835]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Dec 07 10:18:49 compute-1 virtqemud[229835]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec 07 10:18:49 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:49 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:18:49 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:49.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:18:49 compute-1 ceph-mds[85822]: mds.cephfs.compute-1.ihigcc asok_command: cache status {prefix=cache status} (starting...)
Dec 07 10:18:49 compute-1 ceph-mds[85822]: mds.cephfs.compute-1.ihigcc Can't run that command on an inactive MDS!
Dec 07 10:18:50 compute-1 ceph-mds[85822]: mds.cephfs.compute-1.ihigcc asok_command: client ls {prefix=client ls} (starting...)
Dec 07 10:18:50 compute-1 ceph-mds[85822]: mds.cephfs.compute-1.ihigcc Can't run that command on an inactive MDS!
Dec 07 10:18:50 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:50 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:18:50 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:50.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:18:50 compute-1 lvm[241228]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 07 10:18:50 compute-1 lvm[241228]: VG ceph_vg0 finished
Dec 07 10:18:50 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:18:50 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 07 10:18:50 compute-1 ceph-mon[80077]: pgmap v1100: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Dec 07 10:18:50 compute-1 ceph-mon[80077]: pgmap v1101: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 1 op/s
Dec 07 10:18:50 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:18:50 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:18:50 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 07 10:18:50 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 07 10:18:50 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:18:50 compute-1 ceph-mds[85822]: mds.cephfs.compute-1.ihigcc asok_command: damage ls {prefix=damage ls} (starting...)
Dec 07 10:18:50 compute-1 ceph-mds[85822]: mds.cephfs.compute-1.ihigcc Can't run that command on an inactive MDS!
Dec 07 10:18:50 compute-1 ceph-mds[85822]: mds.cephfs.compute-1.ihigcc asok_command: dump loads {prefix=dump loads} (starting...)
Dec 07 10:18:50 compute-1 ceph-mds[85822]: mds.cephfs.compute-1.ihigcc Can't run that command on an inactive MDS!
Dec 07 10:18:50 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "report"} v 0)
Dec 07 10:18:50 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2914638343' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 07 10:18:50 compute-1 ceph-mds[85822]: mds.cephfs.compute-1.ihigcc asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Dec 07 10:18:50 compute-1 ceph-mds[85822]: mds.cephfs.compute-1.ihigcc Can't run that command on an inactive MDS!
Dec 07 10:18:51 compute-1 ceph-mds[85822]: mds.cephfs.compute-1.ihigcc asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Dec 07 10:18:51 compute-1 ceph-mds[85822]: mds.cephfs.compute-1.ihigcc Can't run that command on an inactive MDS!
Dec 07 10:18:51 compute-1 ceph-mds[85822]: mds.cephfs.compute-1.ihigcc asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Dec 07 10:18:51 compute-1 ceph-mds[85822]: mds.cephfs.compute-1.ihigcc Can't run that command on an inactive MDS!
Dec 07 10:18:51 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 07 10:18:51 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3634784853' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:18:51 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/3768533798' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 07 10:18:51 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/2914638343' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 07 10:18:51 compute-1 ceph-mon[80077]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 07 10:18:51 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/2780908691' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:18:51 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/3634784853' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:18:51 compute-1 ceph-mds[85822]: mds.cephfs.compute-1.ihigcc asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Dec 07 10:18:51 compute-1 ceph-mds[85822]: mds.cephfs.compute-1.ihigcc Can't run that command on an inactive MDS!
Dec 07 10:18:51 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:51 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:18:51 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:51.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:18:51 compute-1 ceph-mds[85822]: mds.cephfs.compute-1.ihigcc asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Dec 07 10:18:51 compute-1 ceph-mds[85822]: mds.cephfs.compute-1.ihigcc Can't run that command on an inactive MDS!
Dec 07 10:18:51 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config log"} v 0)
Dec 07 10:18:51 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2288752549' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec 07 10:18:51 compute-1 ceph-mds[85822]: mds.cephfs.compute-1.ihigcc asok_command: get subtrees {prefix=get subtrees} (starting...)
Dec 07 10:18:51 compute-1 ceph-mds[85822]: mds.cephfs.compute-1.ihigcc Can't run that command on an inactive MDS!
Dec 07 10:18:52 compute-1 ceph-mds[85822]: mds.cephfs.compute-1.ihigcc asok_command: ops {prefix=ops} (starting...)
Dec 07 10:18:52 compute-1 ceph-mds[85822]: mds.cephfs.compute-1.ihigcc Can't run that command on an inactive MDS!
Dec 07 10:18:52 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:52 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:18:52 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:52.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:18:52 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Dec 07 10:18:52 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1821197214' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec 07 10:18:52 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Dec 07 10:18:52 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/846436850' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec 07 10:18:52 compute-1 ceph-mon[80077]: from='client.25820 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:18:52 compute-1 ceph-mon[80077]: from='client.16704 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:18:52 compute-1 ceph-mon[80077]: from='client.25841 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:18:52 compute-1 ceph-mon[80077]: from='client.16722 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:18:52 compute-1 ceph-mon[80077]: from='client.26212 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:18:52 compute-1 ceph-mon[80077]: pgmap v1102: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 1 op/s
Dec 07 10:18:52 compute-1 ceph-mon[80077]: from='client.25865 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:18:52 compute-1 ceph-mon[80077]: from='client.16740 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:18:52 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1851089738' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec 07 10:18:52 compute-1 ceph-mon[80077]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 07 10:18:52 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/1218753477' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 07 10:18:52 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/2288752549' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec 07 10:18:52 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/4225188626' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:18:52 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/2127122219' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec 07 10:18:52 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/1821197214' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec 07 10:18:52 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/846436850' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec 07 10:18:52 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Dec 07 10:18:52 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/227543812' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 07 10:18:52 compute-1 ceph-mds[85822]: mds.cephfs.compute-1.ihigcc asok_command: session ls {prefix=session ls} (starting...)
Dec 07 10:18:52 compute-1 ceph-mds[85822]: mds.cephfs.compute-1.ihigcc Can't run that command on an inactive MDS!
Dec 07 10:18:52 compute-1 ceph-mds[85822]: mds.cephfs.compute-1.ihigcc asok_command: status {prefix=status} (starting...)
Dec 07 10:18:52 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:18:53 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Dec 07 10:18:53 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1531641723' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 07 10:18:53 compute-1 ceph-mon[80077]: from='client.26227 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:18:53 compute-1 ceph-mon[80077]: from='client.25880 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:18:53 compute-1 ceph-mon[80077]: from='client.26242 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:18:53 compute-1 ceph-mon[80077]: from='client.16773 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:18:53 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/223410288' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec 07 10:18:53 compute-1 ceph-mon[80077]: from='client.26254 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:18:53 compute-1 ceph-mon[80077]: from='client.16800 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:18:53 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/2001357645' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec 07 10:18:53 compute-1 ceph-mon[80077]: from='client.25916 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:18:53 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/227543812' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 07 10:18:53 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2076272538' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec 07 10:18:53 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/87309197' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 07 10:18:53 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2401558198' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec 07 10:18:53 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/1531641723' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 07 10:18:53 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:53 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:18:53 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:53.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:18:53 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Dec 07 10:18:53 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2472298548' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 07 10:18:53 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Dec 07 10:18:53 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/929755951' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 07 10:18:53 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Dec 07 10:18:53 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/512280981' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec 07 10:18:54 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 07 10:18:54 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/498771494' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 07 10:18:54 compute-1 sudo[241766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 07 10:18:54 compute-1 sudo[241766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:18:54 compute-1 sudo[241766]: pam_unix(sudo:session): session closed for user root
Dec 07 10:18:54 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:54 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:18:54 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:54.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:18:54 compute-1 ceph-mon[80077]: from='client.16821 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:18:54 compute-1 ceph-mon[80077]: from='client.25943 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:18:54 compute-1 ceph-mon[80077]: pgmap v1103: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 675 B/s rd, 0 op/s
Dec 07 10:18:54 compute-1 ceph-mon[80077]: from='client.26278 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:18:54 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/773047032' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 07 10:18:54 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/340637457' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 07 10:18:54 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/3917696844' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 07 10:18:54 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/2472298548' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 07 10:18:54 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/929755951' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 07 10:18:54 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/1109532700' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 07 10:18:54 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/2997670805' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec 07 10:18:54 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/512280981' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec 07 10:18:54 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/152797592' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 07 10:18:54 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:18:54 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:18:54 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/498771494' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 07 10:18:54 compute-1 ceph-mon[80077]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 07 10:18:54 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/4061193771' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 07 10:18:54 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Dec 07 10:18:54 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1963022008' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 07 10:18:54 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Dec 07 10:18:54 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/330127799' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec 07 10:18:54 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec 07 10:18:54 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1135010580' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 07 10:18:55 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Dec 07 10:18:55 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2962791439' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec 07 10:18:55 compute-1 ceph-mon[80077]: from='client.26290 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:18:55 compute-1 ceph-mon[80077]: from='client.16881 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:18:55 compute-1 ceph-mon[80077]: from='client.25997 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:18:55 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2151030524' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 07 10:18:55 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/982302260' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 07 10:18:55 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/1963022008' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 07 10:18:55 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2592046621' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec 07 10:18:55 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/2007917552' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec 07 10:18:55 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2072173053' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 07 10:18:55 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/330127799' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec 07 10:18:55 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/1135010580' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 07 10:18:55 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/2323889937' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 07 10:18:55 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1348909976' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec 07 10:18:55 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/3718978622' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 07 10:18:55 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1175056470' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 07 10:18:55 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/2962791439' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec 07 10:18:55 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:55 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:18:55 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:55.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:18:55 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Dec 07 10:18:55 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2287235129' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 07 10:18:55 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Dec 07 10:18:55 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1931337441' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 07 10:18:56 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:56 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:18:56 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:56.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:18:56 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Dec 07 10:18:56 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2045640580' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 07 10:18:56 compute-1 ceph-mon[80077]: from='client.26329 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:18:56 compute-1 ceph-mon[80077]: from='client.26042 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:18:56 compute-1 ceph-mon[80077]: pgmap v1104: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 1 op/s
Dec 07 10:18:56 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/3021526834' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec 07 10:18:56 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/2287235129' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 07 10:18:56 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2991316428' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 07 10:18:56 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/1931337441' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 07 10:18:56 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/4118430938' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec 07 10:18:56 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/295760253' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 07 10:18:56 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/2045640580' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:46:34.730946+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 4702208 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977091 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:46:35.731125+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 4702208 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9e8000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:46:36.731303+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82296832 unmapped: 4694016 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:46:37.731487+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82296832 unmapped: 4694016 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:46:38.731688+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 4685824 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:46:39.731821+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 4685824 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977091 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9e8000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:46:40.731951+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82313216 unmapped: 4677632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:46:41.732114+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82313216 unmapped: 4677632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:46:42.732291+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82313216 unmapped: 4677632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9e8000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:46:43.732451+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82321408 unmapped: 4669440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9e8000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:46:44.732612+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82321408 unmapped: 4669440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977091 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:46:45.732918+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 4661248 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:46:46.733103+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 4661248 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:46:47.733282+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 4661248 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9e8000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:46:48.733543+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82337792 unmapped: 4653056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9e8000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:46:49.733731+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82337792 unmapped: 4653056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9e8000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977091 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:46:50.733863+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 4644864 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:46:51.734045+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 4644864 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9e8000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613f88ad400 session 0x5613fb494f00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:46:52.734228+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82354176 unmapped: 4636672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:46:53.734444+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 4620288 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:46:54.734605+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 4612096 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9e8000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977091 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:46:55.734998+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 4612096 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:46:56.735226+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 4603904 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:46:57.735402+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 4603904 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fa2f3c00 session 0x5613fb3ada40
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:46:58.735580+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 4587520 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:46:59.735812+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 4587520 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9e8000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977091 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:00.735954+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 4587520 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:01.736163+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 4579328 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:02.736399+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 4579328 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb15d800
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 41.197807312s of 41.303417206s, submitted: 57
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:03.736573+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 4579328 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9e8000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:04.736731+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 4571136 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:05.736879+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977223 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 4571136 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:06.737033+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 4562944 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:07.737199+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 4562944 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:08.737406+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 4554752 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb15dc00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9e8000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [0,0,0,0,0,0,1])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9e8000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:09.737590+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 4546560 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb15d400
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:10.737788+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977263 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 4546560 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:11.737936+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 4538368 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:12.738098+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 4538368 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:13.738273+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90000
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.113190651s of 10.354475021s, submitted: 3
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82468864 unmapped: 4521984 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:14.738437+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82468864 unmapped: 4521984 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:15.738596+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 978775 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82477056 unmapped: 4513792 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:16.738770+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82477056 unmapped: 4513792 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:17.738909+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82477056 unmapped: 4513792 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:18.739088+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 4497408 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:19.739266+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 4497408 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:20.739461+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 978775 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 4464640 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:21.739599+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 4464640 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:22.739785+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 4456448 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:23.739998+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 4456448 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.222052574s of 10.481109619s, submitted: 3
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:24.740200+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 4456448 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:25.740383+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977920 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 4448256 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:26.740606+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 4448256 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:27.741315+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 4440064 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:28.741523+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 4440064 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:29.741675+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 4440064 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:30.741852+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977920 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82558976 unmapped: 4431872 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:31.742080+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82558976 unmapped: 4431872 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:32.742231+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82567168 unmapped: 4423680 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:33.742426+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82567168 unmapped: 4423680 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:34.742659+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82567168 unmapped: 4423680 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:35.742834+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977920 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82575360 unmapped: 4415488 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:36.743071+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82575360 unmapped: 4415488 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:37.743247+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 4407296 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:38.743511+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 4407296 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:39.743674+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82591744 unmapped: 4399104 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:40.743880+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977920 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82591744 unmapped: 4399104 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:41.744066+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 4390912 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:42.744195+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 4390912 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:43.744339+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 4382720 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:44.744535+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 4374528 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:45.744717+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977920 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 4374528 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:46.744892+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 4366336 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:47.745013+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 4366336 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:48.745176+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 4366336 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:49.745385+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 4358144 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:50.745571+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977920 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 4358144 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:51.745772+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 4341760 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:52.745951+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 4341760 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:53.746147+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 4341760 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:54.746406+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 4333568 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:55.746557+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977920 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 4333568 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:56.746739+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 4325376 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:57.746928+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 4325376 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:58.747109+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 4325376 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:47:59.747273+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82673664 unmapped: 4317184 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:00.747471+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977920 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82681856 unmapped: 4308992 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:01.747715+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82690048 unmapped: 4300800 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:02.747989+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82690048 unmapped: 4300800 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:03.748181+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82706432 unmapped: 4284416 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:04.748360+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82706432 unmapped: 4284416 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:05.748513+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977920 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 4276224 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:06.748702+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 4276224 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:07.748868+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 4276224 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:08.749092+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 4268032 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:09.749239+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 4268032 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:10.749522+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977920 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 4259840 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:11.751027+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 4259840 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:12.751199+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 4259840 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:13.751438+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 4251648 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:14.751693+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 4251648 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:15.751866+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977920 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82747392 unmapped: 4243456 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:16.752076+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82747392 unmapped: 4243456 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:17.753007+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82747392 unmapped: 4243456 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:18.753250+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 4235264 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:19.753429+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 4235264 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:20.753714+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977920 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 4227072 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:21.753964+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 4227072 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:22.754171+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 4227072 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:23.754404+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 4218880 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:24.754583+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 4218880 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:25.754762+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977920 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 4210688 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:26.754931+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 4210688 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:27.755139+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82788352 unmapped: 4202496 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:28.755309+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82796544 unmapped: 4194304 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:29.755453+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82796544 unmapped: 4194304 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:30.755681+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977920 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 4186112 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:31.755897+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 4186112 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:32.756110+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 4177920 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:33.756306+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 4177920 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:34.756503+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fbe90000 session 0x5613fb689e00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613f9e95c00 session 0x5613faef61e0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82821120 unmapped: 4169728 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:35.756690+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977920 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82821120 unmapped: 4169728 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:36.756863+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 4161536 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:37.757280+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 4161536 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:38.757475+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 4161536 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:39.757604+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 4153344 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:40.757790+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977920 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 4153344 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:41.757961+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 4153344 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:42.758096+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82845696 unmapped: 4145152 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:43.760783+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82845696 unmapped: 4145152 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:44.760968+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 4136960 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:45.761134+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90400
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 81.224777222s of 81.616699219s, submitted: 1
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 978052 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 4136960 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:46.761291+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 4136960 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:47.761454+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 4128768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:48.761668+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 4128768 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:49.761847+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82870272 unmapped: 4120576 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:50.762004+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979564 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82870272 unmapped: 4120576 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:51.762188+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90800
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82878464 unmapped: 4112384 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:52.762367+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82878464 unmapped: 4112384 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:53.762563+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82886656 unmapped: 4104192 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:54.762772+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82894848 unmapped: 4096000 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:55.762961+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979564 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82894848 unmapped: 4096000 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:56.763177+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82894848 unmapped: 4096000 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:57.763363+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.925945282s of 12.271673203s, submitted: 2
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 4087808 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:58.763705+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 4079616 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:48:59.763898+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 4079616 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:00.764151+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 978841 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82919424 unmapped: 4071424 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:01.764372+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82919424 unmapped: 4071424 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:02.764558+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82919424 unmapped: 4071424 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:03.764758+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 4063232 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:04.764918+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 4063232 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:05.765084+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fbe90800 session 0x5613fac73a40
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fbe90400 session 0x5613f9f3de00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 978841 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 4063232 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:06.765315+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82935808 unmapped: 4055040 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:07.765537+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82935808 unmapped: 4055040 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:08.765771+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82944000 unmapped: 4046848 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:09.765913+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82944000 unmapped: 4046848 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:10.766111+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 978841 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82944000 unmapped: 4046848 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:11.766409+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82952192 unmapped: 4038656 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:12.766584+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82952192 unmapped: 4038656 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:13.766809+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82960384 unmapped: 4030464 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:14.767027+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82960384 unmapped: 4030464 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:15.767246+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fb15dc00 session 0x5613fa2123c0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 978841 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82960384 unmapped: 4030464 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:16.767421+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f88ad400
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.801702499s of 18.811840057s, submitted: 2
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82993152 unmapped: 3997696 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:17.767583+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 82993152 unmapped: 3997696 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:18.767809+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83001344 unmapped: 3989504 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:19.767987+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83001344 unmapped: 3989504 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:20.768208+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 978973 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83001344 unmapped: 3989504 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:21.768388+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83009536 unmapped: 3981312 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:22.768596+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83017728 unmapped: 3973120 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:23.768803+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83025920 unmapped: 3964928 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:24.768963+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83025920 unmapped: 3964928 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:25.769093+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980485 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fa2f3c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:26.769238+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83034112 unmapped: 3956736 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:27.769387+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83034112 unmapped: 3956736 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:28.769563+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83034112 unmapped: 3956736 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:29.769795+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 3948544 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.799379349s of 12.811473846s, submitted: 3
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:30.769998+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83042304 unmapped: 3948544 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979894 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:31.770159+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83050496 unmapped: 3940352 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:32.770312+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90000
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83050496 unmapped: 3940352 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:33.770501+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83050496 unmapped: 3940352 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:34.770707+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 3932160 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:35.770861+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83058688 unmapped: 3932160 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979894 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:36.771096+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 3923968 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:37.771330+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 3923968 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:38.771669+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 3915776 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:39.771871+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 3915776 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:40.772086+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83075072 unmapped: 3915776 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979303 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:41.772371+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 3907584 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:42.772692+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 3907584 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:43.772928+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 3907584 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.264689445s of 14.283196449s, submitted: 4
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:44.773145+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 3891200 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:45.773287+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 3891200 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979171 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:46.773434+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83099648 unmapped: 3891200 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:47.773684+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83107840 unmapped: 3883008 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:48.773882+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83124224 unmapped: 3866624 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:49.774076+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83124224 unmapped: 3866624 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:50.774271+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83124224 unmapped: 3866624 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979171 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:51.774465+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83132416 unmapped: 3858432 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:52.774696+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 3850240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613f9e95c00 session 0x5613f9dd34a0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613f88ad400 session 0x5613faef6b40
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:53.774869+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83140608 unmapped: 3850240 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:54.775023+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83148800 unmapped: 3842048 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:55.775191+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83156992 unmapped: 3833856 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979171 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:56.775348+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 3825664 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:57.775557+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83165184 unmapped: 3825664 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:58.775836+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 3817472 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:49:59.775990+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 3817472 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:00.776173+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 3817472 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979171 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:01.776341+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 3809280 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:02.776699+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 3809280 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:03.776888+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83189760 unmapped: 3801088 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.194709778s of 20.198877335s, submitted: 1
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:04.777031+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 3776512 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:05.777209+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83222528 unmapped: 3768320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979303 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:06.777395+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83222528 unmapped: 3768320 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:07.777558+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83238912 unmapped: 3751936 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:08.777781+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 3743744 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:09.777971+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 3743744 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:10.778138+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83255296 unmapped: 3735552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980815 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:11.778287+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83255296 unmapped: 3735552 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:12.778437+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 3727360 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:13.778590+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83263488 unmapped: 3727360 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 8331 writes, 34K keys, 8331 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s
                                           Cumulative WAL: 8331 writes, 1678 syncs, 4.96 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8331 writes, 34K keys, 8331 commit groups, 1.0 writes per commit group, ingest: 21.61 MB, 0.04 MB/s
                                           Interval WAL: 8331 writes, 1678 syncs, 4.96 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b19350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b19350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b19350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b19350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b19350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b19350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b19350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b189b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b189b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b189b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b19350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b19350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:14.778735+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83329024 unmapped: 3661824 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:15.778963+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83329024 unmapped: 3661824 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980815 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:16.779243+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83329024 unmapped: 3661824 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:17.779477+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 3653632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:18.779715+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 3653632 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:19.779861+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 3645440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fb15d400 session 0x5613fb494d20
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fb15d800 session 0x5613f90683c0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:20.780477+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 3645440 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980815 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:21.780675+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.692077637s of 17.700977325s, submitted: 2
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83353600 unmapped: 3637248 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:22.780835+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83353600 unmapped: 3637248 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:23.781025+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83353600 unmapped: 3637248 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:24.781179+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 3629056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:25.781354+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 3629056 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980683 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:26.781570+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 3620864 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:27.781795+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 3620864 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:28.782056+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 3620864 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:29.782235+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 3612672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:30.782368+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 3612672 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f88ad400
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980815 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:31.782612+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 3604480 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:32.782798+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 3596288 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:33.782962+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 3588096 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.072896957s of 12.167848587s, submitted: 2
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:34.783137+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 3579904 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:35.783299+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613f9e95c00 session 0x5613fb2cab40
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fbe90c00 session 0x5613fb6894a0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 3579904 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982327 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:36.783533+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 3579904 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:37.783687+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 3571712 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:38.784284+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 3571712 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:39.784451+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 3563520 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:40.784610+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 3563520 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982327 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:41.784770+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 3563520 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:42.784973+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fbe90000 session 0x5613f9559e00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fa2f3c00 session 0x5613fc42c960
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 3555328 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:43.785185+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 3555328 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.927210808s of 10.081953049s, submitted: 2
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:44.785341+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 3547136 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:45.785505+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 3547136 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982327 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:46.785711+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 3547136 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:47.785963+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 3538944 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:48.786198+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 3538944 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:49.786447+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 3530752 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:50.786664+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 3530752 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:51.786828+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982327 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 3522560 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:52.787013+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 3522560 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb15d800
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:53.787185+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 3522560 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:54.787387+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90000
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.491564751s of 10.833621025s, submitted: 2
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 3514368 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:55.787667+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 3514368 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:56.787875+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983971 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 3506176 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:57.788095+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 3506176 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:58.788312+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 3506176 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:50:59.788495+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 3497984 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:00.788764+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 3497984 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:01.789059+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984892 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 2424832 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:02.789207+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 2424832 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:03.789361+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 2416640 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:04.789548+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 2416640 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:05.789725+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 2416640 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.610193253s of 11.685529709s, submitted: 4
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:06.789878+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984037 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 2408448 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:07.790050+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 2408448 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:08.790256+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 2400256 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:09.790477+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 2392064 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:10.790689+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 2392064 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:11.790830+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984037 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 2383872 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:12.791000+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 2383872 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:13.791160+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 2383872 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:14.791338+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 2375680 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:15.791486+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 2375680 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:16.791677+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984037 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 2367488 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:17.791832+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 2367488 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:18.792038+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 2359296 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:19.792223+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:20.834411+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 2359296 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:21.834582+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 2359296 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984037 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:22.834761+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 2351104 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:23.834914+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 2342912 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:24.835056+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84656128 unmapped: 2334720 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:25.835204+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84656128 unmapped: 2334720 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:26.835364+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84656128 unmapped: 2334720 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984037 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:27.835510+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 2326528 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:28.835693+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 2326528 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:29.835835+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 2318336 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:30.835971+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 2318336 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:31.838252+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 2318336 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984037 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fbe90c00 session 0x5613f9f3fe00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613f88ad400 session 0x5613fb2cb680
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:32.838593+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 2310144 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:33.838917+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 2310144 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:34.839081+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 2301952 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:35.839213+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 2301952 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:36.839350+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 2301952 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984037 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:37.839480+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84697088 unmapped: 2293760 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:38.840138+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84697088 unmapped: 2293760 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:39.840282+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 2285568 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:40.840424+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 2285568 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:41.840567+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84713472 unmapped: 2277376 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984037 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:42.840790+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84713472 unmapped: 2277376 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb15dc00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 36.408725739s of 36.416835785s, submitted: 2
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:43.840994+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84713472 unmapped: 2277376 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:44.841289+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84721664 unmapped: 2269184 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:45.841466+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84721664 unmapped: 2269184 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:46.841636+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 2260992 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984169 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:47.841884+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 2260992 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:48.842092+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84738048 unmapped: 2252800 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90400
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:49.842238+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84738048 unmapped: 2252800 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:50.842386+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84738048 unmapped: 2252800 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:51.842530+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 2244608 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984169 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:52.842675+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 2244608 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:53.842806+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 2236416 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:54.842942+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 2236416 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:55.843065+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.763906479s of 12.768121719s, submitted: 1
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84762624 unmapped: 2228224 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:56.843204+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 2220032 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983446 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:57.843385+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 2220032 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:58.843564+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 2211840 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:51:59.843687+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 2211840 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:00.843915+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 2203648 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:01.844124+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 2203648 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983446 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:02.844286+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84795392 unmapped: 2195456 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:03.844402+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84795392 unmapped: 2195456 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:04.844539+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84795392 unmapped: 2195456 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:05.844706+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84803584 unmapped: 2187264 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:06.844864+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84803584 unmapped: 2187264 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983446 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:07.845023+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84811776 unmapped: 2179072 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:08.845210+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84811776 unmapped: 2179072 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:09.845348+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84811776 unmapped: 2179072 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:10.845485+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84819968 unmapped: 2170880 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:11.845642+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84819968 unmapped: 2170880 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983446 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:12.845786+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84828160 unmapped: 2162688 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:13.845912+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84828160 unmapped: 2162688 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:14.846065+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84828160 unmapped: 2162688 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:15.846229+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84836352 unmapped: 2154496 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:16.846430+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84836352 unmapped: 2154496 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983446 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:17.846726+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84836352 unmapped: 2154496 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:18.847019+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84836352 unmapped: 2154496 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:19.847324+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84836352 unmapped: 2154496 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:20.847603+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84836352 unmapped: 2154496 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:21.847801+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 2146304 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983446 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:22.847981+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 2146304 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:23.848192+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 2146304 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:24.848375+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 2146304 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:25.848546+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 2146304 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:26.848722+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 2146304 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983446 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:27.848919+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 2146304 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:28.849095+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 2146304 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:29.849247+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 2146304 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:30.849465+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 2146304 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:31.849699+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 2146304 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983446 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:32.849861+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 2146304 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:33.850009+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 2146304 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:34.850207+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84852736 unmapped: 2138112 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:35.850648+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84852736 unmapped: 2138112 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:36.850865+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84852736 unmapped: 2138112 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983446 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:37.851088+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84852736 unmapped: 2138112 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:38.851396+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84852736 unmapped: 2138112 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:39.851602+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84852736 unmapped: 2138112 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:40.851880+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84852736 unmapped: 2138112 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:41.852119+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84852736 unmapped: 2138112 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983446 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:42.852293+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84852736 unmapped: 2138112 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:43.852454+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84852736 unmapped: 2138112 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:44.852658+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84852736 unmapped: 2138112 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:45.852790+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84852736 unmapped: 2138112 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:46.852956+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84852736 unmapped: 2138112 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983446 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:47.853150+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84852736 unmapped: 2138112 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:48.853370+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84852736 unmapped: 2138112 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc9ea000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 53.735519409s of 53.881134033s, submitted: 2
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:49.853492+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84852736 unmapped: 2138112 heap: 86990848 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fb15dc00 session 0x5613f9dd34a0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fbe90400 session 0x5613faef6960
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:50.853606+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84942848 unmapped: 3096576 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:51.853679+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 3080192 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983446 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:52.853881+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 3080192 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:53.854002+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 3080192 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:54.854161+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 3080192 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:55.854314+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 3080192 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:56.854441+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 3080192 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983446 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:57.854591+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 3080192 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:58.854784+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 3080192 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:52:59.854931+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 3080192 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:53:00.855085+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 3080192 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe91000
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.116690636s of 11.485289574s, submitted: 369
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:53:01.855222+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 3080192 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983578 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:53:02.855347+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 3080192 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:53:03.855468+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 3080192 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:53:04.855675+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 3080192 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:53:05.855834+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 3080192 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:53:06.855979+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 3080192 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe91400
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986602 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:53:07.856104+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 3080192 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:53:08.856310+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 3080192 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:53:09.856483+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 3080192 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:53:10.856714+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 3080192 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:53:11.856820+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 3080192 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986602 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:53:12.857015+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 3080192 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:53:13.857148+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 3080192 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:53:14.857265+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 3080192 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:53:15.857402+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 3080192 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:53:16.857530+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 3080192 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986602 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 3080192 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.116033554s of 17.130371094s, submitted: 3
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:53:18.424537+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 3080192 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:53:19.424679+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 3080192 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:53:20.424801+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fbe91400 session 0x5613fc6c3680
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fbe91000 session 0x5613fc02ab40
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 3080192 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:53:21.425010+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 3080192 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:53:22.425180+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986470 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 3080192 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:53:23.425295+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 3080192 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:53:24.425431+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 3080192 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:53:25.425752+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 3080192 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:53:26.425891+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 3080192 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:53:27.426032+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986470 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 3080192 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:53:28.426190+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 3080192 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:53:29.426365+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 3080192 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:53:30.426550+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 3080192 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:53:31.426715+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 3080192 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:53:32.426871+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986470 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 3080192 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:53:33.427028+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fbe90000 session 0x5613f9cb6960
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 3080192 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:53:34.427162+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f88ad400
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.425667763s of 16.111967087s, submitted: 1
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 3080192 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:53:35.427322+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 3080192 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:53:36.427454+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 3080192 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:53:37.427647+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986602 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 3080192 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:53:38.427809+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 3080192 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:53:39.428141+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 3080192 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:53:40.428306+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb15dc00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:53:41.428451+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:53:42.428599+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986602 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:53:43.428764+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:53:44.428958+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90400
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.142539978s of 10.148192406s, submitted: 1
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:53:45.429134+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:53:46.429256+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:53:47.429428+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986602 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:53:48.429576+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:53:49.429776+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:53:50.429927+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:53:51.430075+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:53:52.430248+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985420 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:53:53.430372+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:53:54.430509+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:53:55.430676+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:53:56.430838+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.124277115s of 12.140081406s, submitted: 4
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:53:57.431167+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985420 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:53:58.431312+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:53:59.431514+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:00.431675+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:01.431884+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:02.432059+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985288 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:03.432203+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:04.432336+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:05.432559+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:06.432686+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:07.432858+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985288 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:08.433021+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:09.433242+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:10.433375+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:11.433566+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:12.433705+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985288 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:13.433839+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:14.434017+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:15.434145+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:16.434345+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:17.434551+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985288 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:18.434761+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:19.434953+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:20.435135+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:21.435290+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:22.435470+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985288 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:23.435646+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:24.435779+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:25.435892+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:26.436041+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:27.436180+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985288 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:28.436354+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:29.436552+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:30.436683+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:31.436810+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:32.436974+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985288 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:33.437214+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:34.437400+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:35.437674+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:36.437924+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:37.438116+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985288 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:38.438296+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:39.438509+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:40.438721+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:41.438932+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:42.439317+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985288 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:43.439536+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:44.439823+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:45.440112+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:46.440360+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:47.440703+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985288 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:48.440896+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:49.441055+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:50.441267+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:51.441398+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:52.441555+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985288 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:53.441728+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:54.441956+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:55.442106+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:56.442245+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:57.442385+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985288 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:58.442557+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:54:59.442692+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:00.442891+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:01.443054+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:02.443239+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985288 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:03.443410+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:04.443641+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:05.443794+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:06.443914+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:07.444088+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985288 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:08.444247+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:09.444424+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:10.444581+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:11.444684+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:12.444832+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985288 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:13.445010+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:14.445182+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:15.445309+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:16.445450+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:17.445668+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985288 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:18.445844+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:19.446032+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:20.446179+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:21.446320+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:22.446513+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985288 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:23.446693+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:24.446839+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fb15dc00 session 0x5613fc02bc20
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613f88ad400 session 0x5613fc02b680
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:25.446986+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:26.447135+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:27.447323+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985288 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:28.447474+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:29.447674+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:30.447822+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:31.447969+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:32.448138+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985288 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:33.448244+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:34.448363+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fbe90c00 session 0x5613fc02a000
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fbe90400 session 0x5613fc0105a0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:35.448502+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f88ad400
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 99.419281006s of 99.498825073s, submitted: 2
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:36.448670+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:37.448816+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985420 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:38.448976+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:39.449173+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:40.449312+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:41.449497+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:42.449663+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985420 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:43.449777+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:44.449918+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:45.450051+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb15dc00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:46.450179+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:47.450314+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985552 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:48.450458+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:49.450815+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.208108902s of 13.214679718s, submitted: 2
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:50.450931+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:51.451105+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90000
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:52.451241+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986932 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:53.451384+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:54.451543+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:55.451681+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:56.451813+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:57.451996+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986932 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:58.452124+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:55:59.452309+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.133431435s of 10.143626213s, submitted: 2
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:00.453277+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:01.453430+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:02.453579+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986800 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:03.453730+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:04.453847+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:05.453969+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:06.454157+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:07.454308+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986800 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:08.454445+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:09.454597+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:10.454748+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:11.454901+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:12.455035+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986800 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:13.455168+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:14.455322+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:15.455453+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:16.455644+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:17.455822+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986800 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:18.455958+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:19.456129+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:20.456280+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:21.456418+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:22.456646+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986800 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:23.456815+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:24.456970+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:25.457142+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:26.457285+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:27.457441+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986800 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:28.457601+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:29.457901+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:30.458109+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:31.458253+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:32.458444+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986800 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:33.458586+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:34.458772+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:35.458895+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:36.459030+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:37.459183+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986800 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:38.459337+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:39.459506+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:40.459636+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:41.459789+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fb15d800 session 0x5613fc3e65a0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613f9e95c00 session 0x5613f9068f00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:42.459922+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986800 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:43.460130+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:44.460332+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:45.460517+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:46.460668+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:47.460805+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986800 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:48.460935+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:49.461122+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:50.461282+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:51.461461+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:52.461594+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986800 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe91000
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 53.599273682s of 53.603805542s, submitted: 1
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:53.461769+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:54.461898+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:55.462011+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:56.462146+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 3063808 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:57.462335+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 3063808 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988444 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:58.462454+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 3063808 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:59.462779+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 3063808 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:00.462974+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 3063808 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:01.463115+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 3063808 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:02.463293+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 3063808 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987853 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:03.463457+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 3063808 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:04.463610+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 3063808 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:05.463837+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 3063808 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:06.463983+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.491450310s of 13.503188133s, submitted: 3
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 3063808 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:07.464119+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 3063808 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987721 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:08.464356+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 3063808 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:09.464566+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 3063808 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:10.464749+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 3063808 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:11.464893+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 3063808 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:12.465060+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 3063808 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987721 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:13.465209+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 3063808 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:14.465392+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 3063808 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:15.465564+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 3063808 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:16.465684+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 3055616 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:17.465832+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 3055616 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987721 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:18.465995+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 3055616 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:19.466163+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 3055616 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:20.466341+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 3055616 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:21.466496+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 3055616 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:22.466727+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 3055616 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987721 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:23.466865+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 3055616 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:24.467039+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 3055616 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:25.467159+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 3055616 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:26.467311+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 3055616 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:27.467468+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 3055616 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987721 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:28.467605+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 3055616 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:29.467843+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 3055616 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:30.468335+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 3055616 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:31.468499+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 3055616 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:32.468650+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 3055616 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987721 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:33.468792+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fbe91000 session 0x5613faef7860
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 3055616 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:34.468935+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 3055616 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:35.469093+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 3055616 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:36.469247+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 3055616 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:37.469373+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84992000 unmapped: 3047424 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987721 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:38.469504+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84992000 unmapped: 3047424 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:39.469761+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84992000 unmapped: 3047424 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:40.469954+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84992000 unmapped: 3047424 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:41.470184+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84992000 unmapped: 3047424 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:42.470329+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84992000 unmapped: 3047424 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987721 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:43.470519+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84992000 unmapped: 3047424 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:44.470678+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe91800
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 38.201156616s of 38.223014832s, submitted: 1
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84992000 unmapped: 3047424 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:45.470823+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84992000 unmapped: 3047424 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:46.471008+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84992000 unmapped: 3047424 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:47.471179+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe91c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85000192 unmapped: 3039232 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989365 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:48.471313+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85000192 unmapped: 3039232 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:49.471524+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85000192 unmapped: 3039232 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:50.471673+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85000192 unmapped: 3039232 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:51.471811+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85000192 unmapped: 3039232 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:52.471936+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85000192 unmapped: 3039232 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989365 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:53.472076+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85000192 unmapped: 3039232 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:54.472197+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85000192 unmapped: 3039232 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:55.472329+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85000192 unmapped: 3039232 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:56.472464+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 3031040 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:57.472652+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.066782951s of 13.264513969s, submitted: 3
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 3031040 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988642 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:58.472789+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 3031040 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:59.472951+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 3031040 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:00.473074+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 3031040 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:01.473207+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 3031040 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:02.473349+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 3031040 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:03.473495+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988642 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 3031040 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:04.473637+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 3031040 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:05.473765+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 3031040 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:06.474228+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 3031040 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:07.474514+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 3031040 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:08.474672+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988642 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 3031040 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:09.474925+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 3031040 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:10.475056+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 3031040 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:11.475228+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 3031040 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:12.475358+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 3031040 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:13.475493+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988642 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fbe90000 session 0x5613fb411a40
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613f88ad400 session 0x5613fc48a780
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 3031040 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:14.475700+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 3031040 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:15.475845+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fbe91800 session 0x5613fb4ef0e0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 3031040 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:16.476050+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 3031040 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:17.476178+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 3031040 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:18.476301+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988642 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 3022848 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:19.476475+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 3022848 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:20.476639+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 3022848 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:21.476817+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 3022848 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:22.476993+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 3022848 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:23.477135+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988642 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 3022848 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:24.477275+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.754665375s of 26.779539108s, submitted: 1
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 3022848 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:25.477425+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 3022848 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:26.477578+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb15d800
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 3022848 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:27.477691+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 3022848 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:28.477807+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990418 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 3022848 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:29.477968+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 3022848 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:30.478095+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 3014656 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:31.478248+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 3014656 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:32.478443+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 3014656 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:33.478585+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990418 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:34.478759+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 3014656 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:35.478896+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 3014656 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.936524391s of 11.331603050s, submitted: 3
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:36.479036+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 3014656 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:37.479242+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 3014656 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:38.479423+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 3014656 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989827 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:39.479656+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 3014656 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:40.479827+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 3014656 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:41.479989+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 3014656 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:42.480124+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 3014656 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:43.480312+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 3014656 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989695 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:44.480563+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 3014656 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:45.480782+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 3014656 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:46.480929+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 3014656 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:47.481073+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 3014656 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:48.481210+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 3014656 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989563 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:49.481394+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 3014656 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:50.481595+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 3014656 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:51.481803+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 3014656 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fbe91c00 session 0x5613faef6780
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fb15dc00 session 0x5613fc48ad20
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:52.481948+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 3014656 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:53.482085+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 3006464 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989563 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:54.482262+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 3006464 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:55.482393+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 3006464 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fb15d800 session 0x5613f9cb6960
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:56.482529+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 3006464 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:57.482702+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 3006464 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:58.482873+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 3006464 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989563 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:59.483038+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 3006464 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:00.483191+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 3006464 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:01.483388+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 3006464 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:02.483534+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 3006464 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f88ad400
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.618913651s of 26.631462097s, submitted: 3
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:03.483653+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 3006464 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989695 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:04.483803+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 3006464 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:05.483970+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 3006464 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb15d800
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:06.484122+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86081536 unmapped: 1957888 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb15dc00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:07.484250+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 3006464 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:08.484384+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 3006464 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991339 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe91800
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:09.484538+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 3006464 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:10.484756+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 3006464 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:11.484918+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 3006464 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:12.485065+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 3006464 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe91c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:13.485204+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 2998272 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991339 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:14.485427+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 2990080 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.062094688s of 12.079286575s, submitted: 3
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:15.485588+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 2990080 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:16.485681+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 2990080 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:17.485892+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 2990080 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:18.486058+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 2990080 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993772 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:19.486350+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 2990080 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:20.486544+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 2990080 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:21.486696+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 2990080 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:22.486812+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:23.486967+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992917 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:24.487139+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:25.487262+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:26.487418+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:27.487648+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:28.487765+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992917 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:29.487938+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:30.488078+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:31.488213+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:32.488352+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fb15d800 session 0x5613fa1fe3c0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613f9e95c00 session 0x5613fba0d860
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:33.488491+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992917 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:34.488698+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:35.488860+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:36.488980+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:37.489132+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:38.489274+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992917 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:39.489447+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 2973696 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:40.489772+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 2973696 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:41.489981+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 2973696 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:42.490106+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 2973696 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90000
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 28.302938461s of 28.331325531s, submitted: 7
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:43.490263+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 2973696 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993049 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:44.490402+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 2973696 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:45.490580+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 2973696 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:46.490714+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 2965504 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:47.490904+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 2965504 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:48.491041+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 2965504 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996073 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90400
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:49.491182+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 2965504 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:50.491316+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 2965504 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:51.491457+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 2965504 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:52.491589+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 2965504 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:53.491684+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 2965504 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996994 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:54.491817+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 2965504 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:55.491933+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.060354233s of 12.191138268s, submitted: 5
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 2965504 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:56.492052+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 2965504 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:57.492177+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 2965504 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:58.492387+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 2965504 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996403 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:59.492604+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 2957312 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:00.492799+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 2957312 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:01.492988+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 2957312 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:02.493187+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 2957312 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:03.493342+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 2957312 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996271 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:04.493494+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fbe90400 session 0x5613fc49c3c0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fbe90000 session 0x5613fac70b40
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 2957312 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:05.493650+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 2957312 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:06.493773+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 2957312 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:07.494099+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 2957312 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:08.494269+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 2957312 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996271 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread fragmentation_score=0.000032 took=0.000055s
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:09.494481+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 2957312 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:10.494743+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 2957312 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:11.494942+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 2957312 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:12.495136+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 2957312 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:13.495302+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 2957312 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996271 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 9124 writes, 36K keys, 9124 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 9124 writes, 2040 syncs, 4.47 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 793 writes, 1304 keys, 793 commit groups, 1.0 writes per commit group, ingest: 0.44 MB, 0.00 MB/s
                                           Interval WAL: 793 writes, 362 syncs, 2.19 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b19350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b19350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b19350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b19350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b19350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b19350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b19350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b189b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b189b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b189b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b19350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b19350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:14.495486+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 2924544 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:15.495697+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe91000
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.060838699s of 20.067579269s, submitted: 2
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 2924544 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:16.496732+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 2924544 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:17.496951+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 2924544 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:18.497131+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 2924544 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996403 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:19.497391+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85123072 unmapped: 2916352 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:20.497605+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85123072 unmapped: 2916352 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:21.497774+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc4b2000
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85131264 unmapped: 2908160 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:22.497954+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85131264 unmapped: 2908160 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:23.498160+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85131264 unmapped: 2908160 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996403 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:24.498317+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [0,0,0,0,1])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 1859584 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:25.498490+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 1859584 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:26.498719+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 1859584 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:27.498910+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.003846169s of 12.153634071s, submitted: 3
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 1859584 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:28.499037+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 1859584 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994498 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:29.499236+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 1859584 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:30.499371+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 1859584 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:31.499564+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 1859584 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:32.499738+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 1859584 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:33.499887+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 1859584 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994498 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:34.500026+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 1859584 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:35.500179+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 1859584 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:36.500294+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fc4b2000 session 0x5613fc49c960
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fbe91000 session 0x5613fa1fed20
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 1859584 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:37.500446+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 1859584 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:38.500598+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 1859584 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994498 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:39.500792+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 1859584 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:40.500987+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 1859584 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:41.501132+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 1859584 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:42.501286+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 1859584 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:43.501434+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 1859584 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994498 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:44.501672+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 1859584 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:45.501811+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 1859584 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:46.501959+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 1859584 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.567741394s of 19.574918747s, submitted: 2
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:47.502148+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 1859584 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:48.502300+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 1859584 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994630 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:49.502548+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 1859584 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:50.502697+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 1859584 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:51.503120+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 1859584 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:52.503281+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 1851392 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:53.503476+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 1851392 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994630 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:54.503679+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 1851392 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:55.503852+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 1851392 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:56.503973+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 1851392 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:57.504113+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 1851392 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:58.504309+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 1851392 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994630 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:59.504488+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 1851392 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:00.504699+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.260245323s of 13.263541222s, submitted: 1
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 1851392 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:01.504852+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 1851392 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fbe91c00 session 0x5613fb685860
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fb15dc00 session 0x5613fb5c90e0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:02.505023+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 1851392 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:03.505197+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 1851392 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994498 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:04.505400+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 1851392 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:05.505533+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 1851392 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:06.505705+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 1851392 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:07.505867+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 1851392 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:08.506014+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 1851392 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994498 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:09.506192+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 1851392 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:10.506349+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 1851392 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:11.506482+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 1851392 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:12.506638+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 1851392 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:13.506802+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 1851392 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994498 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:14.506975+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 1851392 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:15.507155+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 1851392 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:16.507325+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb15d800
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.454084396s of 16.458341599s, submitted: 1
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86196224 unmapped: 1843200 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:17.507746+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [0,0,0,0,0,1,0,0,0,0,2])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86196224 unmapped: 1843200 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:18.507898+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994630 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86196224 unmapped: 1843200 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:19.508055+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86196224 unmapped: 1843200 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:20.508219+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86196224 unmapped: 1843200 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:21.508419+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86196224 unmapped: 1843200 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:22.508599+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86196224 unmapped: 1843200 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:23.508784+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994630 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86196224 unmapped: 1843200 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:24.508919+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86196224 unmapped: 1843200 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:25.509139+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90000
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86196224 unmapped: 1843200 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:26.509319+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 1835008 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:27.509559+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 1835008 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:28.509893+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996010 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 1835008 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:29.510169+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 1835008 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:30.510343+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 1835008 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:31.510549+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fbe91800 session 0x5613fc214b40
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613f88ad400 session 0x5613faef6960
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 1835008 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:32.510678+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 1835008 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:33.510823+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996010 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 1835008 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:34.510962+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 1835008 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:35.511174+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 1835008 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:36.511392+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 1835008 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:37.511581+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 1835008 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:38.511767+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996010 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 1835008 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:39.512754+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 1835008 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:40.512886+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 1835008 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:41.514004+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 1835008 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:42.515383+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f88ad400
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.819530487s of 25.665802002s, submitted: 3
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 1835008 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:43.515545+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996142 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 1835008 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:44.516708+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 1835008 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:45.517098+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 1835008 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:46.517961+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613f9e95c00 session 0x5613f9069680
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 1835008 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:47.518177+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 1835008 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:48.518686+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb15dc00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997654 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86212608 unmapped: 1826816 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:49.518882+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86212608 unmapped: 1826816 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:50.519389+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86220800 unmapped: 1818624 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:51.519814+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86220800 unmapped: 1818624 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:52.520238+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86220800 unmapped: 1818624 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:53.520426+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997654 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86220800 unmapped: 1818624 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:54.520766+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86220800 unmapped: 1818624 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:55.521226+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86220800 unmapped: 1818624 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:56.521567+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe91000
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.521481514s of 14.528515816s, submitted: 2
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86220800 unmapped: 1818624 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:57.521959+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86228992 unmapped: 1810432 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:58.522389+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997654 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86228992 unmapped: 1810432 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:59.522730+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86237184 unmapped: 1802240 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:00.522941+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86237184 unmapped: 1802240 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:01.523120+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86237184 unmapped: 1802240 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:02.523332+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe91800
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86237184 unmapped: 1802240 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:03.523534+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997063 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86237184 unmapped: 1802240 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:04.523702+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86237184 unmapped: 1802240 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:05.523922+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:06.524148+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86237184 unmapped: 1802240 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:07.524373+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86245376 unmapped: 1794048 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:08.524574+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86245376 unmapped: 1794048 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997063 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.129097939s of 12.140315056s, submitted: 3
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:09.524748+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86253568 unmapped: 1785856 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:10.525072+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86253568 unmapped: 1785856 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fb15d800 session 0x5613fc49d860
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fbe90000 session 0x5613faef7c20
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:11.525219+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86253568 unmapped: 1785856 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:12.525407+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86253568 unmapped: 1785856 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:13.525729+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86253568 unmapped: 1785856 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996340 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:14.525910+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86253568 unmapped: 1785856 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:15.526248+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86253568 unmapped: 1785856 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:16.526705+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86253568 unmapped: 1785856 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:17.526949+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86253568 unmapped: 1785856 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:18.527280+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86253568 unmapped: 1785856 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996340 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:19.527613+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86253568 unmapped: 1785856 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:20.527789+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86253568 unmapped: 1785856 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:21.527993+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86253568 unmapped: 1785856 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe91c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.481614113s of 12.492132187s, submitted: 3
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:22.528284+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86253568 unmapped: 1785856 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:23.528532+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86253568 unmapped: 1785856 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996472 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:24.528685+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86253568 unmapped: 1785856 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:25.528890+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86253568 unmapped: 1785856 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:26.529091+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86253568 unmapped: 1785856 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:27.529297+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 1777664 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:28.529490+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 1777664 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997984 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:29.529662+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 1777664 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:30.529794+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 1777664 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:31.529964+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 1777664 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:32.530116+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 1777664 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:33.530290+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 1777664 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:34.530531+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997984 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 1777664 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:35.530685+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 1777664 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:36.530878+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 1777664 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.158850670s of 15.166591644s, submitted: 2
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:37.531005+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 1777664 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:38.531156+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 1777664 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fbe91000 session 0x5613fb495680
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fbe91800 session 0x5613fc6c32c0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:39.531390+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997852 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 1777664 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:40.531509+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 1777664 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:41.531651+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 1777664 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:42.531838+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 1777664 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:43.531971+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 1777664 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:44.532157+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997852 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 1777664 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:45.532308+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 1777664 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:46.532459+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 1777664 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:47.532727+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 1777664 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:48.532879+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 1777664 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:49.533087+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997852 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 1777664 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe91800
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.706581116s of 12.710232735s, submitted: 1
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:50.533226+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86351872 unmapped: 1687552 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:51.533377+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87834624 unmapped: 1253376 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:52.533777+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87834624 unmapped: 1253376 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:53.533944+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87834624 unmapped: 1253376 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:54.534163+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999496 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87834624 unmapped: 1253376 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:55.534308+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87834624 unmapped: 1253376 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:56.534494+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87842816 unmapped: 1245184 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:57.534648+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87842816 unmapped: 1245184 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:58.534820+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87842816 unmapped: 1245184 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:59.535052+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999496 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87851008 unmapped: 1236992 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:00.535249+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87851008 unmapped: 1236992 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:01.535406+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87851008 unmapped: 1236992 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.950714111s of 12.075445175s, submitted: 382
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fbe91c00 session 0x5613fc48ad20
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:02.535573+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87851008 unmapped: 1236992 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:03.535738+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87851008 unmapped: 1236992 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:04.535899+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998905 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87851008 unmapped: 1236992 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:05.536056+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87851008 unmapped: 1236992 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:06.536235+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87851008 unmapped: 1236992 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:07.536408+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87851008 unmapped: 1236992 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:08.536630+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 1228800 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:09.536813+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998773 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 1228800 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:10.537041+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 1228800 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:11.537243+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 1228800 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:12.537394+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 1228800 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb15d800
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.236394882s of 11.243492126s, submitted: 2
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:13.537573+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 1228800 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:14.537777+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998905 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 1228800 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:15.537974+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87867392 unmapped: 1220608 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:16.538175+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87867392 unmapped: 1220608 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:17.538325+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87875584 unmapped: 1212416 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:18.538512+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87875584 unmapped: 1212416 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:19.538748+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998905 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87875584 unmapped: 1212416 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:20.538942+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87875584 unmapped: 1212416 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:21.539124+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87875584 unmapped: 1212416 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:22.539248+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87875584 unmapped: 1212416 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.111818314s of 10.114892006s, submitted: 1
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:23.539440+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87883776 unmapped: 1204224 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:24.539613+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998314 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87883776 unmapped: 1204224 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:25.539848+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87883776 unmapped: 1204224 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:26.540031+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87883776 unmapped: 1204224 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:27.540259+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87883776 unmapped: 1204224 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:28.540435+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87883776 unmapped: 1204224 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:29.540613+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998182 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87883776 unmapped: 1204224 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:30.540767+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87883776 unmapped: 1204224 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:31.540905+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87891968 unmapped: 1196032 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:32.541115+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87891968 unmapped: 1196032 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:33.541296+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87891968 unmapped: 1196032 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:34.541470+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998182 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87891968 unmapped: 1196032 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:35.541736+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87891968 unmapped: 1196032 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:36.542002+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87891968 unmapped: 1196032 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:37.542211+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87891968 unmapped: 1196032 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:38.542343+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87891968 unmapped: 1196032 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:39.542537+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998182 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87891968 unmapped: 1196032 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:40.542806+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613f9e95c00 session 0x5613fc51b4a0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fbe91800 session 0x5613f9f3ef00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87891968 unmapped: 1196032 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:41.542987+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87891968 unmapped: 1196032 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:42.543202+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87891968 unmapped: 1196032 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:43.543379+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87891968 unmapped: 1196032 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:44.543551+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998182 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87891968 unmapped: 1196032 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:45.543742+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87891968 unmapped: 1196032 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:46.543930+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87891968 unmapped: 1196032 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:47.544104+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87900160 unmapped: 1187840 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:48.544331+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87900160 unmapped: 1187840 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:49.544610+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998182 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87900160 unmapped: 1187840 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:50.544839+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87900160 unmapped: 1187840 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:51.545023+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90000
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 28.453969955s of 28.460792542s, submitted: 2
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87908352 unmapped: 1179648 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:52.545205+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87908352 unmapped: 1179648 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:53.545364+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87908352 unmapped: 1179648 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:54.545516+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999826 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87908352 unmapped: 1179648 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:55.545699+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87916544 unmapped: 1171456 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:56.545900+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87916544 unmapped: 1171456 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:57.546096+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe91000
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87916544 unmapped: 1171456 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:58.546306+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87916544 unmapped: 1171456 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:59.546489+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001338 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87916544 unmapped: 1171456 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:00.546687+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87916544 unmapped: 1171456 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:01.546883+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87916544 unmapped: 1171456 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:02.547056+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87916544 unmapped: 1171456 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:03.547233+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87916544 unmapped: 1171456 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:04.547414+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.090477943s of 13.101955414s, submitted: 3
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001206 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 1163264 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:05.547673+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 1163264 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:06.547875+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 1163264 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:07.548030+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 1163264 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:08.548201+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 1163264 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:09.548419+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001206 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 1163264 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:10.548567+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 1163264 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:11.548759+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 1163264 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:12.548971+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 1163264 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:13.549172+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 1163264 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:14.549345+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001206 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 1163264 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:15.549732+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 1163264 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:16.549966+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 1163264 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:17.550189+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 1163264 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:18.550491+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 1163264 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:19.550829+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001206 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 1163264 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:20.551043+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 1163264 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:21.551266+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 1163264 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:22.551556+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 1163264 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:23.551726+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:24.551982+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001206 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:25.552222+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:26.552393+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:27.552593+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:28.552717+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:29.552898+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001206 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:30.553158+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:31.553351+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:32.553526+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:33.553699+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:34.553870+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001206 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:35.554044+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:36.554200+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:37.554365+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:38.554525+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:39.554676+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001206 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:40.554907+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:41.555062+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:42.555231+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:43.555399+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:44.555580+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001206 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:45.555744+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:46.556509+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:47.556822+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:48.557064+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:49.558034+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001206 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:50.558251+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:51.558460+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:52.558690+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:53.558856+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:54.559056+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001206 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:55.559256+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:56.559452+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:57.559603+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:58.559808+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:59.560045+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001206 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:00.560250+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:01.560437+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:02.560704+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:03.560868+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:04.561117+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001206 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:05.561334+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:06.561579+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:07.561764+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:08.561975+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:09.562235+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001206 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:10.562479+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:11.562714+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:12.562919+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:13.563091+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:14.563362+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001206 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:15.563570+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:16.563820+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:17.564081+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:18.564332+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:19.564610+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001206 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:20.565003+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:21.565188+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:22.565354+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:23.565484+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:24.565657+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001206 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:25.565846+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:26.566051+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:27.566189+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:28.566340+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:29.566527+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001206 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:30.566719+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:31.566872+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:32.567056+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:33.567202+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:34.567309+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001206 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:35.567379+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:36.567533+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:37.567691+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:38.567808+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:39.568015+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001206 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:40.568179+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:41.568339+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:42.568508+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:43.568717+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:44.568926+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001206 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:45.569075+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:46.569212+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:47.569345+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:48.569525+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:49.569757+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001206 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:50.569959+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:51.570133+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:52.570267+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:53.570416+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:54.570558+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001206 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:55.570768+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:56.570966+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:57.571207+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:58.571397+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:59.571550+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001206 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:00.571707+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:01.571892+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fb15dc00 session 0x5613fa1fe5a0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613f88ad400 session 0x5613fa1fe3c0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:02.572072+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _renew_subs
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 118.321998596s of 118.326072693s, submitted: 1
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:03.572229+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 149 handle_osd_map epochs [149,150], i have 149, src has [1,150]
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:04.572357+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 88039424 unmapped: 18882560 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _renew_subs
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 151 ms_handle_reset con 0x5613f9e95c00 session 0x5613fc756b40
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1070180 data_alloc: 218103808 data_used: 253952
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:05.572529+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 88039424 unmapped: 18882560 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb15dc00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:06.572706+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 18989056 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 151 handle_osd_map epochs [151,152], i have 151, src has [1,152]
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 152 ms_handle_reset con 0x5613fb15dc00 session 0x5613fc756960
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:07.572869+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 18980864 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:08.572964+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 18980864 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:09.573146+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 18980864 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fbdca000/0x0/0x4ffc00000, data 0x97d589/0xa41000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074835 data_alloc: 218103808 data_used: 262144
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:10.573316+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 18980864 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:11.573503+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 18980864 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fbdca000/0x0/0x4ffc00000, data 0x97d589/0xa41000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:12.573637+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 18980864 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 152 handle_osd_map epochs [152,153], i have 152, src has [1,153]
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe91800
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 153 ms_handle_reset con 0x5613fbe91000 session 0x5613fb686780
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 153 ms_handle_reset con 0x5613fbe90000 session 0x5613fc2abc20
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:13.573774+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 18980864 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:14.573902+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 18980864 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077725 data_alloc: 218103808 data_used: 262144
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:15.574060+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 18980864 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fbdc7000/0x0/0x4ffc00000, data 0x97f55b/0xa44000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe91c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.536023140s of 12.729516029s, submitted: 74
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:16.574218+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 18980864 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:17.574863+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 18980864 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:18.575008+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 18980864 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90400
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:19.575236+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 18980864 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fbdc8000/0x0/0x4ffc00000, data 0x97f55b/0xa44000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079909 data_alloc: 218103808 data_used: 262144
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:20.575406+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 18980864 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:21.575653+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 18980864 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:22.575847+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 18980864 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:23.575966+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 18980864 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc4b2400
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fbdc8000/0x0/0x4ffc00000, data 0x97f55b/0xa44000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:24.576153+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 18980864 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1080041 data_alloc: 218103808 data_used: 262144
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:25.576318+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 18980864 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:26.576556+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 18980864 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fbdc8000/0x0/0x4ffc00000, data 0x97f55b/0xa44000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:27.576758+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 18980864 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.903751373s of 11.915815353s, submitted: 3
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:28.576884+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fbdc8000/0x0/0x4ffc00000, data 0x97f55b/0xa44000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 18980864 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:29.577122+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 18980864 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc4b2800
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079318 data_alloc: 218103808 data_used: 262144
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:30.577330+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 18980864 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fbdc8000/0x0/0x4ffc00000, data 0x97f55b/0xa44000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:31.577483+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 18980864 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:32.577689+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fbdc8000/0x0/0x4ffc00000, data 0x97f55b/0xa44000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 18980864 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:33.577778+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 18997248 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fbdc8000/0x0/0x4ffc00000, data 0x97f55b/0xa44000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:34.577937+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 18997248 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079318 data_alloc: 218103808 data_used: 262144
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:35.578113+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 18997248 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:36.578240+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fbdc8000/0x0/0x4ffc00000, data 0x97f55b/0xa44000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 18997248 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:37.578361+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 18997248 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:38.578554+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 18997248 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:39.578845+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 18997248 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1078004 data_alloc: 218103808 data_used: 262144
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:40.578970+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 18997248 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fbdc8000/0x0/0x4ffc00000, data 0x97f55b/0xa44000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:41.579174+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 18997248 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc4b2c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 153 ms_handle_reset con 0x5613fc4b2c00 session 0x5613fc7572c0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 153 ms_handle_reset con 0x5613f9e95c00 session 0x5613fc7574a0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:42.579375+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 95715328 unmapped: 11206656 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:43.579535+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb15dc00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 11190272 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fbdc8000/0x0/0x4ffc00000, data 0x97f55b/0xa44000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:44.579716+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 153 handle_osd_map epochs [153,154], i have 153, src has [1,154]
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.819295883s of 16.843923569s, submitted: 6
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _renew_subs
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 154 handle_osd_map epochs [154,154], i have 154, src has [1,154]
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 95780864 unmapped: 11141120 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fbdc8000/0x0/0x4ffc00000, data 0x97f55b/0xa44000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1103250 data_alloc: 218103808 data_used: 7086080
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:45.579911+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _renew_subs
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 154 handle_osd_map epochs [155,155], i have 154, src has [1,155]
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 95780864 unmapped: 11141120 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 155 ms_handle_reset con 0x5613fb15dc00 session 0x5613fc757860
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90000
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 155 ms_handle_reset con 0x5613fbe90000 session 0x5613fc756780
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe91000
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 155 ms_handle_reset con 0x5613fbe91000 session 0x5613f88c0f00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc4b3000
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 155 ms_handle_reset con 0x5613fc4b3000 session 0x5613fc3e65a0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 155 ms_handle_reset con 0x5613f9e95c00 session 0x5613fb2cbe00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fb684000/0x0/0x4ffc00000, data 0x10be7fc/0x1187000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:46.580054+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 96231424 unmapped: 10690560 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fb684000/0x0/0x4ffc00000, data 0x10be7fc/0x1187000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:47.580302+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 96231424 unmapped: 10690560 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb15dc00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 155 ms_handle_reset con 0x5613fb15dc00 session 0x5613fc73cf00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:48.580466+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 96231424 unmapped: 10690560 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:49.580647+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 96231424 unmapped: 10690560 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90000
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 155 ms_handle_reset con 0x5613fbe90000 session 0x5613f95592c0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fb684000/0x0/0x4ffc00000, data 0x10be7fc/0x1187000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165908 data_alloc: 218103808 data_used: 7086080
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:50.580798+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe91000
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 155 ms_handle_reset con 0x5613fbe91000 session 0x5613fc49da40
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 96231424 unmapped: 10690560 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc4b3000
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 155 ms_handle_reset con 0x5613fc4b3000 session 0x5613fba0e000
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fb684000/0x0/0x4ffc00000, data 0x10be7fc/0x1187000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:51.580904+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb15dc00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 96280576 unmapped: 10641408 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:52.581102+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 96280576 unmapped: 10641408 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _renew_subs
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 155 handle_osd_map epochs [156,156], i have 155, src has [1,156]
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:53.581369+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 99614720 unmapped: 7307264 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:54.581545+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 103194624 unmapped: 3727360 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217838 data_alloc: 234881024 data_used: 14565376
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:55.581703+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb681000/0x0/0x4ffc00000, data 0x10c07ce/0x118a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 103194624 unmapped: 3727360 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:56.581868+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 103194624 unmapped: 3727360 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:57.582003+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 103194624 unmapped: 3727360 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:58.582152+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 103194624 unmapped: 3727360 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:59.582386+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 103194624 unmapped: 3727360 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217838 data_alloc: 234881024 data_used: 14565376
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:00.582565+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 103194624 unmapped: 3727360 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb681000/0x0/0x4ffc00000, data 0x10c07ce/0x118a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:01.582724+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 103194624 unmapped: 3727360 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:02.582926+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 103194624 unmapped: 3727360 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:03.583133+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 103194624 unmapped: 3727360 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.420452118s of 19.636718750s, submitted: 77
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb681000/0x0/0x4ffc00000, data 0x10c07ce/0x118a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:04.583281+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 110600192 unmapped: 1564672 heap: 112164864 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1293890 data_alloc: 234881024 data_used: 15286272
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:05.583448+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 110706688 unmapped: 1458176 heap: 112164864 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:06.583677+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 108683264 unmapped: 3481600 heap: 112164864 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:07.583867+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 108683264 unmapped: 3481600 heap: 112164864 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:08.584033+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 108683264 unmapped: 3481600 heap: 112164864 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:09.584246+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 108716032 unmapped: 3448832 heap: 112164864 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9ca3000/0x0/0x4ffc00000, data 0x18ff7ce/0x19c9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297322 data_alloc: 234881024 data_used: 15429632
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:10.584419+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 108716032 unmapped: 3448832 heap: 112164864 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:11.584668+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 108716032 unmapped: 3448832 heap: 112164864 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:12.584842+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 108756992 unmapped: 3407872 heap: 112164864 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:13.584996+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 108756992 unmapped: 3407872 heap: 112164864 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9ca3000/0x0/0x4ffc00000, data 0x18ff7ce/0x19c9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:14.585168+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 108773376 unmapped: 3391488 heap: 112164864 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1298250 data_alloc: 234881024 data_used: 15499264
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:15.585332+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 108773376 unmapped: 3391488 heap: 112164864 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:16.585731+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:17.585893+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 108773376 unmapped: 3391488 heap: 112164864 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:18.586550+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 108773376 unmapped: 3391488 heap: 112164864 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:19.586822+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 108773376 unmapped: 3391488 heap: 112164864 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9ca3000/0x0/0x4ffc00000, data 0x18ff7ce/0x19c9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1298250 data_alloc: 234881024 data_used: 15499264
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:20.586964+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 108806144 unmapped: 3358720 heap: 112164864 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:21.587138+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 108806144 unmapped: 3358720 heap: 112164864 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9ca3000/0x0/0x4ffc00000, data 0x18ff7ce/0x19c9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90000
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe90000 session 0x5613fc2aad20
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:22.587274+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe91000
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe90400 session 0x5613fc2ab0e0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe91800 session 0x5613fba0e1e0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 110043136 unmapped: 2121728 heap: 112164864 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe91000 session 0x5613fc3e6f00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:23.587608+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc4b3400
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.168640137s of 19.375652313s, submitted: 87
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 10330112 heap: 126861312 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fc4b3400 session 0x5613fb5c92c0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90000
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe90000 session 0x5613fc51ad20
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90400
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe90400 session 0x5613fb2ca5a0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe91000
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe91000 session 0x5613fb4eeb40
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe91800
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe91800 session 0x5613fc011c20
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:24.587805+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 19750912 heap: 130539520 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:25.588044+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1405947 data_alloc: 234881024 data_used: 16031744
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 19750912 heap: 130539520 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:26.588268+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 19750912 heap: 130539520 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:27.588424+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 19750912 heap: 130539520 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f8e76000/0x0/0x4ffc00000, data 0x272b830/0x27f6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:28.588580+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 19718144 heap: 130539520 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:29.588830+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 19718144 heap: 130539520 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc4b3800
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fc4b3800 session 0x5613fc49d860
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:30.589021+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1406308 data_alloc: 234881024 data_used: 16031744
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 110837760 unmapped: 19701760 heap: 130539520 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90000
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90400
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:31.589174+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 110845952 unmapped: 19693568 heap: 130539520 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:32.589345+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f8e76000/0x0/0x4ffc00000, data 0x272b830/0x27f6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [1])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 116301824 unmapped: 14237696 heap: 130539520 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:33.589507+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe91000
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125075456 unmapped: 5464064 heap: 130539520 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:34.589696+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125075456 unmapped: 5464064 heap: 130539520 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:35.589872+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1506132 data_alloc: 251658240 data_used: 30793728
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125075456 unmapped: 5464064 heap: 130539520 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:36.590032+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.878297806s of 13.041009903s, submitted: 42
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f8e76000/0x0/0x4ffc00000, data 0x272b830/0x27f6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125149184 unmapped: 5390336 heap: 130539520 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:37.590198+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125149184 unmapped: 5390336 heap: 130539520 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:38.590387+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125149184 unmapped: 5390336 heap: 130539520 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:39.590594+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe91800
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125149184 unmapped: 5390336 heap: 130539520 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:40.590827+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1507948 data_alloc: 251658240 data_used: 30793728
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125149184 unmapped: 5390336 heap: 130539520 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f8e74000/0x0/0x4ffc00000, data 0x272c830/0x27f7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:41.591019+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125181952 unmapped: 5357568 heap: 130539520 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:42.591158+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125181952 unmapped: 5357568 heap: 130539520 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:43.591269+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125181952 unmapped: 5357568 heap: 130539520 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f8e74000/0x0/0x4ffc00000, data 0x272c830/0x27f7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [1])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:44.591511+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 132603904 unmapped: 1081344 heap: 133685248 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:45.591706+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1622320 data_alloc: 251658240 data_used: 31846400
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 133136384 unmapped: 1597440 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:46.591826+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f8199000/0x0/0x4ffc00000, data 0x3408830/0x34d3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 133136384 unmapped: 1597440 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:47.591991+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.790105820s of 11.055186272s, submitted: 128
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 133136384 unmapped: 1597440 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:48.592220+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 133144576 unmapped: 1589248 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:49.592471+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 133144576 unmapped: 1589248 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:50.592679+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1615332 data_alloc: 251658240 data_used: 32133120
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 131301376 unmapped: 3432448 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f818d000/0x0/0x4ffc00000, data 0x3414830/0x34df000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:51.592883+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 131301376 unmapped: 3432448 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:52.593056+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 131301376 unmapped: 3432448 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:53.593257+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 131301376 unmapped: 3432448 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:54.593501+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 131301376 unmapped: 3432448 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe90000 session 0x5613f9f3d860
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe90400 session 0x5613fc6c2f00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc4b3c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:55.593704+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fc4b3c00 session 0x5613fc49c5a0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315218 data_alloc: 234881024 data_used: 16031744
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 15499264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f98a1000/0x0/0x4ffc00000, data 0x19007ce/0x19ca000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:56.593840+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 15499264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:57.593996+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f98a1000/0x0/0x4ffc00000, data 0x19007ce/0x19ca000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 15499264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:58.594139+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 15499264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:59.594341+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 15499264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:00.594481+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f98a1000/0x0/0x4ffc00000, data 0x19007ce/0x19ca000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315218 data_alloc: 234881024 data_used: 16031744
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.011151314s of 13.162199974s, submitted: 50
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613fba0e780
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 15499264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fb15dc00 session 0x5613f9f3e780
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:01.594709+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613fb688780
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 21643264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:02.594848+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 21643264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:03.594979+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 21643264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:04.595132+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 21643264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:05.595269+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1137852 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac1d000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 21643264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac1d000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:06.595423+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 21643264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:07.595740+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 21643264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac1d000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:08.595933+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 21643264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:09.596131+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 21643264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac1d000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:10.596299+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1137852 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 21643264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:11.596469+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 21643264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:12.596602+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 21643264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:13.596796+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 21643264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac1d000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:14.596958+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 21643264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:15.597103+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1137852 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 21643264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:16.597243+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac1d000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 21643264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:17.597375+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 21643264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:18.597490+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 21643264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:19.597700+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 21643264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:20.597903+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1137852 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac1d000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 21643264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac1d000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:21.598089+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 21643264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:22.598267+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 21643264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:23.598405+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 21643264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:24.598570+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 21643264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:25.598756+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90000
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1137852 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 24.815328598s of 24.952289581s, submitted: 44
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe90000 session 0x5613fc6c34a0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 37666816 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90400
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe90400 session 0x5613f9cb6b40
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc4b3c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fc4b3c00 session 0x5613fb4941e0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc107400
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fc107400 session 0x5613fc42d0e0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613fb2cad20
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:26.598952+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9dae000/0x0/0x4ffc00000, data 0x17f576c/0x18be000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 37666816 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fc4b2800 session 0x5613fa24d680
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fc4b2400 session 0x5613fb410d20
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:27.599122+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 37666816 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:28.599303+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 37666816 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:29.599506+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 37666816 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90000
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe90000 session 0x5613fac71860
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:30.599696+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244374 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90400
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc4b3c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 37363712 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9d8a000/0x0/0x4ffc00000, data 0x181976c/0x18e2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:31.599830+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9d8a000/0x0/0x4ffc00000, data 0x181976c/0x18e2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113123328 unmapped: 37486592 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:32.599979+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 30695424 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:33.600194+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 30695424 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:34.600356+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 30695424 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:35.600526+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339374 data_alloc: 234881024 data_used: 21630976
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 30695424 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:36.600669+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 30695424 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:37.600890+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9d8a000/0x0/0x4ffc00000, data 0x181976c/0x18e2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 30695424 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf56400
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.176795006s of 12.298893929s, submitted: 14
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:38.601031+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 30695424 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:39.601170+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 30695424 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9d8a000/0x0/0x4ffc00000, data 0x181976c/0x18e2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:40.601301+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339506 data_alloc: 234881024 data_used: 21630976
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 30695424 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9d8a000/0x0/0x4ffc00000, data 0x181976c/0x18e2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:41.601528+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 30695424 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:42.601673+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 119898112 unmapped: 30711808 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:43.601816+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 122273792 unmapped: 28336128 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:44.601973+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f970e000/0x0/0x4ffc00000, data 0x1e9576c/0x1f5e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 122707968 unmapped: 27901952 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:45.602140+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1410990 data_alloc: 234881024 data_used: 22097920
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 122707968 unmapped: 27901952 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:46.602322+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f96dc000/0x0/0x4ffc00000, data 0x1ec676c/0x1f8f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 122707968 unmapped: 27901952 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:47.602451+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 122707968 unmapped: 27901952 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:48.602601+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 122707968 unmapped: 27901952 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:49.602838+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.586152077s of 11.780480385s, submitted: 70
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121733120 unmapped: 28876800 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:50.603013+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1407087 data_alloc: 234881024 data_used: 22097920
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121733120 unmapped: 28876800 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:51.603276+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121733120 unmapped: 28876800 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f96da000/0x0/0x4ffc00000, data 0x1ec976c/0x1f92000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:52.603430+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121733120 unmapped: 28876800 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:53.603559+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121733120 unmapped: 28876800 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:54.603720+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121733120 unmapped: 28876800 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:55.603910+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1406955 data_alloc: 234881024 data_used: 22097920
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121733120 unmapped: 28876800 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f96da000/0x0/0x4ffc00000, data 0x1ec976c/0x1f92000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:56.604113+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121733120 unmapped: 28876800 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f96da000/0x0/0x4ffc00000, data 0x1ec976c/0x1f92000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:57.604293+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121733120 unmapped: 28876800 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:58.604475+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121733120 unmapped: 28876800 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:59.604691+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121733120 unmapped: 28876800 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:00.605147+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1406955 data_alloc: 234881024 data_used: 22097920
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121733120 unmapped: 28876800 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:01.605310+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121733120 unmapped: 28876800 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f96da000/0x0/0x4ffc00000, data 0x1ec976c/0x1f92000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:02.605466+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f96da000/0x0/0x4ffc00000, data 0x1ec976c/0x1f92000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe91c00 session 0x5613f9069860
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fb15d800 session 0x5613fc73c5a0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121733120 unmapped: 28876800 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f96da000/0x0/0x4ffc00000, data 0x1ec976c/0x1f92000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:03.605673+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121733120 unmapped: 28876800 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:04.605874+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121733120 unmapped: 28876800 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:05.606032+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1406955 data_alloc: 234881024 data_used: 22097920
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe90400 session 0x5613fb4103c0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fc4b3c00 session 0x5613fba123c0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121716736 unmapped: 28893184 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.448675156s of 16.463441849s, submitted: 4
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:06.606172+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613fc73cf00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 37912576 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f96da000/0x0/0x4ffc00000, data 0x1ec976c/0x1f92000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:07.606303+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 37912576 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:08.606474+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 37912576 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:09.606651+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 37912576 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:10.606813+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1152945 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 37912576 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:11.606941+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 37912576 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:12.607099+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 37912576 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90000
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:13.607255+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 37912576 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:14.607411+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 37912576 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:15.607675+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153077 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 37912576 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:16.607864+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc4b2400
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.148706436s of 10.198718071s, submitted: 22
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 37912576 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:17.607999+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 37912576 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:18.620009+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 37912576 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:19.620214+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc4b2800
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112091136 unmapped: 38518784 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:20.620459+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154589 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112091136 unmapped: 38518784 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:21.620761+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112091136 unmapped: 38518784 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:22.620976+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112091136 unmapped: 38518784 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:23.621156+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112091136 unmapped: 38518784 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:24.621333+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112091136 unmapped: 38518784 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:25.621467+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153407 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112091136 unmapped: 38518784 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:26.621575+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112091136 unmapped: 38518784 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:27.621728+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.545178413s of 11.556472778s, submitted: 3
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112099328 unmapped: 38510592 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:28.621866+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112099328 unmapped: 38510592 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:29.622038+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112099328 unmapped: 38510592 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:30.622215+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153275 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112099328 unmapped: 38510592 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:31.622344+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc32c000
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fc32c000 session 0x5613fc3e6000
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613fc3e7680
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb15d800
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fb15d800 session 0x5613fc2aba40
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90400
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe90400 session 0x5613fc2aa5a0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc4b3c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [1])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fc4b3c00 session 0x5613fc2ab4a0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc107c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fc107c00 session 0x5613fb4114a0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613fc2aa960
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38469632 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb15d800
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fb15d800 session 0x5613f95592c0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90400
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe90400 session 0x5613fc48b680
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:32.622480+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38469632 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:33.622679+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38469632 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:34.622822+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38469632 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:35.623003+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa107000/0x0/0x4ffc00000, data 0x108b77c/0x1155000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206494 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38469632 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:36.623141+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38469632 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc4b3c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:37.623286+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fc4b3c00 session 0x5613fb410000
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa107000/0x0/0x4ffc00000, data 0x108b77c/0x1155000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf57c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc32dc00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 38453248 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:38.623461+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112345088 unmapped: 38264832 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:39.623682+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa106000/0x0/0x4ffc00000, data 0x108b79f/0x1156000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113188864 unmapped: 37421056 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:40.623968+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1259412 data_alloc: 234881024 data_used: 14983168
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113188864 unmapped: 37421056 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa106000/0x0/0x4ffc00000, data 0x108b79f/0x1156000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:41.624385+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113188864 unmapped: 37421056 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:42.624558+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa106000/0x0/0x4ffc00000, data 0x108b79f/0x1156000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113188864 unmapped: 37421056 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:43.624930+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113188864 unmapped: 37421056 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:44.625254+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113188864 unmapped: 37421056 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa106000/0x0/0x4ffc00000, data 0x108b79f/0x1156000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:45.625548+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1259412 data_alloc: 234881024 data_used: 14983168
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113188864 unmapped: 37421056 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:46.625742+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113188864 unmapped: 37421056 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:47.626312+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113188864 unmapped: 37421056 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:48.626588+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113188864 unmapped: 37421056 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:49.626894+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.326250076s of 21.455661774s, submitted: 29
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115564544 unmapped: 35045376 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:50.627165+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1293924 data_alloc: 234881024 data_used: 15011840
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9df0000/0x0/0x4ffc00000, data 0x139379f/0x145e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 33480704 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:51.627377+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 33480704 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:52.627537+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 33480704 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:53.627683+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 33382400 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:54.627816+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fc4b2400 session 0x5613fc3e65a0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf56400 session 0x5613fb4945a0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 33382400 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:55.628026+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1293924 data_alloc: 234881024 data_used: 15011840
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 33366016 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:56.628262+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9df0000/0x0/0x4ffc00000, data 0x139379f/0x145e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 33366016 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:57.628464+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 33366016 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:58.628704+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 33366016 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9dfb000/0x0/0x4ffc00000, data 0x139679f/0x1461000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:59.628936+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 33366016 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:00.629114+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288748 data_alloc: 234881024 data_used: 15011840
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 33366016 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:01.629256+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 33366016 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:02.629397+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.119906425s of 13.254473686s, submitted: 55
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 33357824 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:03.629562+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 33357824 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:04.629715+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9dfa000/0x0/0x4ffc00000, data 0x139779f/0x1462000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 33357824 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:05.629882+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288972 data_alloc: 234881024 data_used: 15011840
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 33357824 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:06.630003+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 33357824 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:07.630157+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9dfa000/0x0/0x4ffc00000, data 0x139779f/0x1462000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613f9162960
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb15d800
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fb15d800 session 0x5613f91634a0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90400
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe90400 session 0x5613f9163c20
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc4b3c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fc4b3c00 session 0x5613fc7563c0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613fc757e00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf56400
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf56400 session 0x5613fc7561e0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb15d800
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fb15d800 session 0x5613fc757680
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90400
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe90400 session 0x5613fb2ca5a0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 116654080 unmapped: 33955840 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb0c7400
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fb0c7400 session 0x5613fb2ca000
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:08.630309+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 116654080 unmapped: 33955840 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:09.630505+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 116654080 unmapped: 33955840 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:10.630700+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313641 data_alloc: 234881024 data_used: 15011840
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 116654080 unmapped: 33955840 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:11.630866+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9b66000/0x0/0x4ffc00000, data 0x1629810/0x16f6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 116654080 unmapped: 33955840 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:12.631055+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 116662272 unmapped: 33947648 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:13.631357+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.039366722s of 11.124565125s, submitted: 25
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 11K writes, 42K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 11K writes, 2845 syncs, 3.89 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1953 writes, 6016 keys, 1953 commit groups, 1.0 writes per commit group, ingest: 5.65 MB, 0.01 MB/s
                                           Interval WAL: 1953 writes, 805 syncs, 2.43 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613fba123c0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 116662272 unmapped: 33947648 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf56400
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb15d800
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:14.631548+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9b65000/0x0/0x4ffc00000, data 0x1629833/0x16f7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 116662272 unmapped: 33947648 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:15.631814+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1323446 data_alloc: 234881024 data_used: 16302080
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118022144 unmapped: 32587776 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:16.631956+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118054912 unmapped: 32555008 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:17.632192+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118054912 unmapped: 32555008 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:18.632368+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118054912 unmapped: 32555008 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:19.632538+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9b65000/0x0/0x4ffc00000, data 0x1629833/0x16f7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118054912 unmapped: 32555008 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:20.632738+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333174 data_alloc: 234881024 data_used: 17711104
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118054912 unmapped: 32555008 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:21.632888+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118054912 unmapped: 32555008 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:22.633005+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9b64000/0x0/0x4ffc00000, data 0x1629833/0x16f7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118054912 unmapped: 32555008 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:23.633219+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118054912 unmapped: 32555008 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:24.633562+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9b64000/0x0/0x4ffc00000, data 0x1629833/0x16f7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118054912 unmapped: 32555008 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:25.633744+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333298 data_alloc: 234881024 data_used: 17715200
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118054912 unmapped: 32555008 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.484905243s of 12.509075165s, submitted: 7
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:26.633881+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 119447552 unmapped: 31162368 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:27.634066+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 119455744 unmapped: 31154176 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:28.634227+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 119488512 unmapped: 31121408 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:29.634458+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 119488512 unmapped: 31121408 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:30.634641+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f97fd000/0x0/0x4ffc00000, data 0x1991833/0x1a5f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1360756 data_alloc: 234881024 data_used: 17846272
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 119496704 unmapped: 31113216 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:31.634815+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 119496704 unmapped: 31113216 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:32.634984+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f97fd000/0x0/0x4ffc00000, data 0x1991833/0x1a5f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 119496704 unmapped: 31113216 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:33.635180+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 119496704 unmapped: 31113216 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:34.635316+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 119496704 unmapped: 31113216 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:35.635480+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f97fd000/0x0/0x4ffc00000, data 0x1991833/0x1a5f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf56400 session 0x5613fba13c20
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fb15d800 session 0x5613fb2cbe00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90400
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1360756 data_alloc: 234881024 data_used: 17846272
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118472704 unmapped: 32137216 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.854346275s of 10.019863129s, submitted: 62
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:36.635695+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe90400 session 0x5613fc73d2c0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118571008 unmapped: 32038912 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:37.635959+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118571008 unmapped: 32038912 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:38.636179+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118571008 unmapped: 32038912 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:39.636440+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118571008 unmapped: 32038912 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:40.636776+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf57c00 session 0x5613fb410d20
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fc32dc00 session 0x5613fc3e61e0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1293708 data_alloc: 234881024 data_used: 14999552
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9df8000/0x0/0x4ffc00000, data 0x139779f/0x1462000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118579200 unmapped: 32030720 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:41.636958+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613fb4ee5a0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115023872 unmapped: 35586048 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:42.637122+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80d000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115023872 unmapped: 35586048 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:43.637352+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80d000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115023872 unmapped: 35586048 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:44.637550+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115032064 unmapped: 35577856 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:45.637741+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169636 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115032064 unmapped: 35577856 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:46.637858+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115032064 unmapped: 35577856 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:47.638034+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115032064 unmapped: 35577856 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:48.638313+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115032064 unmapped: 35577856 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:49.638605+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80d000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115032064 unmapped: 35577856 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:50.638956+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169636 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115032064 unmapped: 35577856 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:51.639141+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115032064 unmapped: 35577856 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:52.639375+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80d000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115040256 unmapped: 35569664 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:53.639575+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115040256 unmapped: 35569664 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:54.639777+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80d000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115040256 unmapped: 35569664 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:55.640063+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169636 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115040256 unmapped: 35569664 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:56.640301+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115040256 unmapped: 35569664 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:57.640545+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115040256 unmapped: 35569664 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:58.640773+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115048448 unmapped: 35561472 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:59.641012+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80d000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115048448 unmapped: 35561472 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:00.641218+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169636 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115048448 unmapped: 35561472 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:01.641382+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115048448 unmapped: 35561472 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:02.641537+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115048448 unmapped: 35561472 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:03.641699+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80d000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:04.641869+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115048448 unmapped: 35561472 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:05.642051+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115048448 unmapped: 35561472 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169636 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:06.642192+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115048448 unmapped: 35561472 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf56400
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.730876923s of 30.943260193s, submitted: 83
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:07.642335+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf56400 session 0x5613fc7563c0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115056640 unmapped: 35553280 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf57c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf57c00 session 0x5613fb4114a0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb15d800
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fb15d800 session 0x5613fc2ab4a0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613fc0103c0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf56400
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf56400 session 0x5613fb689e00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa08e000/0x0/0x4ffc00000, data 0x110576c/0x11ce000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:08.642502+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 35602432 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:09.642701+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 35602432 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa08e000/0x0/0x4ffc00000, data 0x110576c/0x11ce000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:10.642903+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 35602432 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223983 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:11.643119+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 35602432 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:12.643265+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 35602432 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf57c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf57c00 session 0x5613fc49c5a0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc32dc00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90400
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa08e000/0x0/0x4ffc00000, data 0x110576c/0x11ce000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:13.643418+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 35602432 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:14.643592+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115048448 unmapped: 35561472 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:15.643725+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 34299904 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa08e000/0x0/0x4ffc00000, data 0x110576c/0x11ce000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1277163 data_alloc: 234881024 data_used: 14954496
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:16.643981+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 34299904 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:17.644174+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 34299904 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:18.644327+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 34299904 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:19.644666+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 34299904 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:20.644886+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 34299904 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa08e000/0x0/0x4ffc00000, data 0x110576c/0x11ce000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1277163 data_alloc: 234881024 data_used: 14954496
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:21.645071+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 34299904 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa08e000/0x0/0x4ffc00000, data 0x110576c/0x11ce000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:22.645311+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 34299904 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:23.645557+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 34299904 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:24.645787+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 34299904 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.286369324s of 17.366155624s, submitted: 16
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:25.645999+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118833152 unmapped: 31776768 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322133 data_alloc: 234881024 data_used: 14954496
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:26.646209+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118874112 unmapped: 31735808 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9b6e000/0x0/0x4ffc00000, data 0x162576c/0x16ee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:27.646462+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118874112 unmapped: 31735808 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:28.646737+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118874112 unmapped: 31735808 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:29.647007+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118874112 unmapped: 31735808 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:30.647192+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118874112 unmapped: 31735808 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322133 data_alloc: 234881024 data_used: 14954496
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:31.647412+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118874112 unmapped: 31735808 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9b62000/0x0/0x4ffc00000, data 0x163176c/0x16fa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:32.647606+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118874112 unmapped: 31735808 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:33.647809+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118874112 unmapped: 31735808 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9b62000/0x0/0x4ffc00000, data 0x163176c/0x16fa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:34.648032+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118874112 unmapped: 31735808 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:35.648188+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118874112 unmapped: 31735808 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9b62000/0x0/0x4ffc00000, data 0x163176c/0x16fa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322133 data_alloc: 234881024 data_used: 14954496
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:36.648378+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118874112 unmapped: 31735808 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:37.648603+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118874112 unmapped: 31735808 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:38.648989+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118874112 unmapped: 31735808 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:39.649206+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118882304 unmapped: 31727616 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:40.649396+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118882304 unmapped: 31727616 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322133 data_alloc: 234881024 data_used: 14954496
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:41.649561+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118882304 unmapped: 31727616 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9b62000/0x0/0x4ffc00000, data 0x163176c/0x16fa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:42.649757+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118882304 unmapped: 31727616 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:43.650013+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118882304 unmapped: 31727616 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:44.650266+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118882304 unmapped: 31727616 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9b62000/0x0/0x4ffc00000, data 0x163176c/0x16fa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:45.650571+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118882304 unmapped: 31727616 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322133 data_alloc: 234881024 data_used: 14954496
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:46.650709+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118882304 unmapped: 31727616 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:47.650828+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118882304 unmapped: 31727616 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:48.651023+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118882304 unmapped: 31727616 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:49.651242+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9b62000/0x0/0x4ffc00000, data 0x163176c/0x16fa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118882304 unmapped: 31727616 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:50.651456+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118882304 unmapped: 31727616 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322133 data_alloc: 234881024 data_used: 14954496
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:51.651709+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118890496 unmapped: 31719424 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:52.651858+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118890496 unmapped: 31719424 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb4a3400
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fb4a3400 session 0x5613fba0d860
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc101800
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fc101800 session 0x5613faef61e0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613faef7860
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf56400
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf56400 session 0x5613fc6c2b40
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf57c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 28.623338699s of 28.730890274s, submitted: 32
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:53.651974+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf57c00 session 0x5613fc6c34a0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb4a3400
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118685696 unmapped: 31924224 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fb4a3400 session 0x5613fa1fe960
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf4bc00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf4bc00 session 0x5613fb686f00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613fb686b40
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf4bc00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf4bc00 session 0x5613fb4ee1e0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:54.652116+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118743040 unmapped: 31866880 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:55.652267+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f911f000/0x0/0x4ffc00000, data 0x20737ce/0x213d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118759424 unmapped: 31850496 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:56.652405+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1404172 data_alloc: 234881024 data_used: 14954496
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118759424 unmapped: 31850496 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:57.652584+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118759424 unmapped: 31850496 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf56400
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf56400 session 0x5613fb4ef2c0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:58.652770+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118759424 unmapped: 31850496 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf57c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf57c00 session 0x5613fba0fa40
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:59.652992+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118759424 unmapped: 31850496 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb4a3400
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fb4a3400 session 0x5613fb410f00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613f90683c0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:00.653234+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118284288 unmapped: 32325632 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f911d000/0x0/0x4ffc00000, data 0x20747f1/0x213f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf4bc00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:01.653437+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf56400
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1405934 data_alloc: 234881024 data_used: 14954496
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118300672 unmapped: 32309248 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:02.653681+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118448128 unmapped: 32161792 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f911d000/0x0/0x4ffc00000, data 0x20747f1/0x213f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:03.653870+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125108224 unmapped: 25501696 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:04.654067+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125108224 unmapped: 25501696 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:05.654292+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125108224 unmapped: 25501696 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:06.654430+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1477374 data_alloc: 234881024 data_used: 25419776
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125108224 unmapped: 25501696 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f911d000/0x0/0x4ffc00000, data 0x20747f1/0x213f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:07.654601+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125140992 unmapped: 25468928 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:08.654807+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125140992 unmapped: 25468928 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:09.655046+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125140992 unmapped: 25468928 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:10.655254+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f911d000/0x0/0x4ffc00000, data 0x20747f1/0x213f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125173760 unmapped: 25436160 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.346025467s of 17.504379272s, submitted: 45
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:11.655427+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1477374 data_alloc: 234881024 data_used: 25419776
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125173760 unmapped: 25436160 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:12.655709+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125173760 unmapped: 25436160 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:13.655869+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 130736128 unmapped: 19873792 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:14.656047+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 129826816 unmapped: 20783104 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:15.656202+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 130203648 unmapped: 20406272 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:16.656366+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572954 data_alloc: 234881024 data_used: 25776128
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f852e000/0x0/0x4ffc00000, data 0x2c4a7f1/0x2d15000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 130236416 unmapped: 20373504 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:17.656519+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 130236416 unmapped: 20373504 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:18.656785+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 130236416 unmapped: 20373504 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:19.656997+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 130236416 unmapped: 20373504 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:20.657165+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 129490944 unmapped: 21118976 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:21.657336+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1568026 data_alloc: 234881024 data_used: 25780224
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 129490944 unmapped: 21118976 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f8526000/0x0/0x4ffc00000, data 0x2c6b7f1/0x2d36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:22.657456+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 129490944 unmapped: 21118976 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:23.657586+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 129490944 unmapped: 21118976 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:24.657752+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 129490944 unmapped: 21118976 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:25.657884+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 129490944 unmapped: 21118976 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:26.658043+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f8526000/0x0/0x4ffc00000, data 0x2c6b7f1/0x2d36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1568026 data_alloc: 234881024 data_used: 25780224
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 129490944 unmapped: 21118976 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.940172195s of 16.268814087s, submitted: 140
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf4bc00 session 0x5613fc3e6960
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf56400 session 0x5613fc2aab40
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf57c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:27.658264+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 27672576 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf57c00 session 0x5613fba12f00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f8526000/0x0/0x4ffc00000, data 0x2c6b7f1/0x2d36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:28.658454+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fc4b2800 session 0x5613fc756960
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe90000 session 0x5613fc3e7e00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 27648000 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:29.658718+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 27648000 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:30.658911+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f90ac000/0x0/0x4ffc00000, data 0x163276c/0x16fb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 27648000 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:31.659109+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335344 data_alloc: 234881024 data_used: 14954496
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 27648000 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:32.659340+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 27648000 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:33.659585+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 27648000 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:34.659770+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f90ac000/0x0/0x4ffc00000, data 0x163276c/0x16fb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 27648000 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:35.659980+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fc32dc00 session 0x5613fba0d4a0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe90400 session 0x5613fba13680
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 27648000 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613fc73d680
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:36.660164+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192764 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117612544 unmapped: 32997376 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:37.660316+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117612544 unmapped: 32997376 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:38.660452+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117612544 unmapped: 32997376 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf4bc00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.920195580s of 12.143515587s, submitted: 86
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:39.660715+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117612544 unmapped: 32997376 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:40.660951+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117612544 unmapped: 32997376 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:41.661161+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192896 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117612544 unmapped: 32997376 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:42.661386+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117612544 unmapped: 32997376 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:43.661734+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117612544 unmapped: 32997376 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:44.661937+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117612544 unmapped: 32997376 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:45.662133+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117612544 unmapped: 32997376 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:46.662404+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192896 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117612544 unmapped: 32997376 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:47.662604+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117612544 unmapped: 32997376 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:48.662765+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117612544 unmapped: 32997376 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:49.662955+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117612544 unmapped: 32997376 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.029423714s of 11.034767151s, submitted: 1
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:50.663126+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117800960 unmapped: 32808960 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:51.663272+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192968 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117940224 unmapped: 32669696 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:52.663501+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117940224 unmapped: 32669696 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:53.663679+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 32661504 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:54.663881+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 32661504 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:55.664042+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 32661504 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:56.664198+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192764 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 32661504 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:57.664391+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 32661504 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:58.664556+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 32661504 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:59.664831+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf56400
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf56400 session 0x5613fb2cb860
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613faef6960
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf56400
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf56400 session 0x5613fc49d680
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90000
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe90000 session 0x5613fc6c2960
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 32661504 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90400
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe90400 session 0x5613fba0cd20
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc32dc00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fc32dc00 session 0x5613fc5ec000
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613fc02a1e0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf56400
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf56400 session 0x5613fc02a3c0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90000
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe90000 session 0x5613f9069e00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:00.665010+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 32661504 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:01.665189+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa78d000/0x0/0x4ffc00000, data 0xa0577a/0xacf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1200150 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117956608 unmapped: 32653312 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:02.665371+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117956608 unmapped: 32653312 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:03.665522+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117956608 unmapped: 32653312 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:04.665710+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90400
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe90400 session 0x5613f9f3c000
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 32645120 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf57c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fa2f3000
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:05.665925+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 32645120 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:06.666104+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1203798 data_alloc: 218103808 data_used: 8138752
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 32636928 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa78d000/0x0/0x4ffc00000, data 0xa0577a/0xacf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:07.666516+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 32636928 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa78d000/0x0/0x4ffc00000, data 0xa0577a/0xacf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:08.666707+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 32636928 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:09.666918+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa78d000/0x0/0x4ffc00000, data 0xa0577a/0xacf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 32636928 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:10.667135+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117981184 unmapped: 32628736 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:11.667311+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf57c00 session 0x5613fb2cbc20
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fa2f3000 session 0x5613fb6861e0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1203798 data_alloc: 218103808 data_used: 8138752
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117981184 unmapped: 32628736 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa78d000/0x0/0x4ffc00000, data 0xa0577a/0xacf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.843708038s of 21.802885056s, submitted: 390
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613fba13860
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:12.667524+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 33423360 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:13.667673+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 33423360 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:14.667825+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 33423360 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:15.668005+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 33423360 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:16.668217+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193682 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 33423360 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:17.668402+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 33415168 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:18.668586+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 33415168 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:19.668857+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 33415168 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:20.669034+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 33415168 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:21.669236+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193682 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 33415168 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:22.669370+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 33406976 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:23.669498+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 33406976 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:24.669716+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 33406976 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:25.669921+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 33406976 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:26.670103+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193682 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 33398784 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:27.670256+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf56400
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 33398784 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.850473404s of 15.884275436s, submitted: 10
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf56400 session 0x5613fb5c9c20
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf57c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf57c00 session 0x5613f9068f00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90000
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe90000 session 0x5613fc73da40
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613fc42da40
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fa2f3000
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fa2f3000 session 0x5613fb5c81e0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:28.670425+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117383168 unmapped: 33226752 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:29.670605+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117383168 unmapped: 33226752 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:30.670824+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa5f8000/0x0/0x4ffc00000, data 0xb9b76c/0xc64000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117383168 unmapped: 33226752 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:31.671081+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215738 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117383168 unmapped: 33226752 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:32.671244+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf56400
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf56400 session 0x5613fa1ff0e0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117391360 unmapped: 33218560 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:33.671430+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf57c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf57c00 session 0x5613f88c0f00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117391360 unmapped: 33218560 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90400
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe90400 session 0x5613fc02b0e0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:34.671594+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613f9dd34a0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe91800 session 0x5613fb47bc20
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe91000 session 0x5613fc3e7860
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117432320 unmapped: 33177600 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fa2f3000
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf56400
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:35.671864+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117432320 unmapped: 33177600 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa5f8000/0x0/0x4ffc00000, data 0xb9b76c/0xc64000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:36.672032+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224447 data_alloc: 218103808 data_used: 8675328
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117432320 unmapped: 33177600 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:37.672199+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117432320 unmapped: 33177600 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa5f8000/0x0/0x4ffc00000, data 0xb9b76c/0xc64000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:38.672406+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117432320 unmapped: 33177600 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa5f8000/0x0/0x4ffc00000, data 0xb9b76c/0xc64000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:39.672578+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fa2f3000 session 0x5613fc5eda40
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.628873825s of 11.685560226s, submitted: 14
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf56400 session 0x5613faef61e0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117432320 unmapped: 33177600 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613fb3acb40
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:40.672759+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 33161216 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:41.672934+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197539 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 33161216 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:42.673095+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 33161216 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:43.673265+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 33161216 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:44.673444+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 33161216 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:45.673672+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fa2f3000
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 33161216 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:46.673865+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197671 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 33161216 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:47.674064+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 33161216 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:48.674239+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 33161216 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:49.674421+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 33161216 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:50.674599+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 33161216 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:51.674850+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197671 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 33161216 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:52.675066+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 33161216 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:53.675256+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 33161216 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:54.675414+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 33161216 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:55.675584+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 33161216 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:56.675756+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197671 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 33161216 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:57.675944+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 33161216 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.679887772s of 18.761703491s, submitted: 27
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:58.676106+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117456896 unmapped: 33153024 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:59.676328+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117456896 unmapped: 33153024 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:00.676532+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117456896 unmapped: 33153024 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:01.676727+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197539 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117456896 unmapped: 33153024 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:02.676888+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117456896 unmapped: 33153024 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:03.677082+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117456896 unmapped: 33153024 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:04.677251+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117465088 unmapped: 33144832 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:05.677445+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117465088 unmapped: 33144832 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:06.677599+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197539 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117465088 unmapped: 33144832 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:07.677788+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117465088 unmapped: 33144832 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:08.677966+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117465088 unmapped: 33144832 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:09.678191+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117465088 unmapped: 33144832 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:10.678377+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 116850688 unmapped: 33759232 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:11.678583+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197539 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 116850688 unmapped: 33759232 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:12.678768+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: mgrc ms_handle_reset ms_handle_reset con 0x5613f9046800
Dec 07 10:18:56 compute-1 ceph-osd[77581]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2113101694
Dec 07 10:18:56 compute-1 ceph-osd[77581]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2113101694,v1:192.168.122.100:6801/2113101694]
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: get_auth_request con 0x5613faf56400 auth_method 0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: mgrc handle_mgr_configure stats_period=5
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 116973568 unmapped: 33636352 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:13.678942+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faeeb000 session 0x5613f9162000
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 116973568 unmapped: 33636352 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe91000
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fafbfc00 session 0x5613fc214780
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf5dc00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:14.679086+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 116973568 unmapped: 33636352 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:15.679280+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 116973568 unmapped: 33636352 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:16.679452+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fafbfc00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.037788391s of 18.041795731s, submitted: 1
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fafbfc00 session 0x5613fc7561e0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262910 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf57c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf57c00 session 0x5613fc02b2c0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbf33c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbf33c00 session 0x5613fc3e74a0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc32f400
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fc32f400 session 0x5613fc3e61e0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118349824 unmapped: 32260096 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613fc3e6780
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:17.679709+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118366208 unmapped: 32243712 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:18.680054+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9f2e000/0x0/0x4ffc00000, data 0x126576c/0x132e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118366208 unmapped: 32243712 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:19.680249+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118366208 unmapped: 32243712 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:20.680466+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9f2e000/0x0/0x4ffc00000, data 0x126576c/0x132e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118366208 unmapped: 32243712 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:21.680647+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262910 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118366208 unmapped: 32243712 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf57c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf57c00 session 0x5613fb2cbe00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:22.680786+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fafbfc00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fafbfc00 session 0x5613fba12780
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118366208 unmapped: 32243712 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbf33c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbf33c00 session 0x5613fba123c0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:23.680923+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb645800
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fb645800 session 0x5613f9dd21e0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9f2e000/0x0/0x4ffc00000, data 0x126576c/0x132e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118366208 unmapped: 32243712 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:24.681068+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb645800
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118374400 unmapped: 32235520 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9f2d000/0x0/0x4ffc00000, data 0x126577c/0x132f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:25.681199+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 120963072 unmapped: 29646848 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:26.681323+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1327280 data_alloc: 234881024 data_used: 16920576
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121192448 unmapped: 29417472 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:27.681510+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121192448 unmapped: 29417472 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:28.681678+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121192448 unmapped: 29417472 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:29.681882+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121192448 unmapped: 29417472 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:30.682061+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121192448 unmapped: 29417472 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9f2d000/0x0/0x4ffc00000, data 0x126577c/0x132f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:31.682247+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1327280 data_alloc: 234881024 data_used: 16920576
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121192448 unmapped: 29417472 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:32.682422+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9f2d000/0x0/0x4ffc00000, data 0x126577c/0x132f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121192448 unmapped: 29417472 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:33.682661+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121192448 unmapped: 29417472 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:34.682872+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121192448 unmapped: 29417472 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:35.683070+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.072046280s of 19.421592712s, submitted: 21
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9f2d000/0x0/0x4ffc00000, data 0x126577c/0x132f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 124084224 unmapped: 26525696 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:36.683267+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1361910 data_alloc: 234881024 data_used: 17256448
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9aa6000/0x0/0x4ffc00000, data 0x16e677c/0x17b0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125386752 unmapped: 25223168 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:37.683509+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126009344 unmapped: 24600576 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:38.683711+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9a9e000/0x0/0x4ffc00000, data 0x16ec77c/0x17b6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126009344 unmapped: 24600576 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:39.683938+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9a8e000/0x0/0x4ffc00000, data 0x16f677c/0x17c0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126009344 unmapped: 24600576 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9a8e000/0x0/0x4ffc00000, data 0x16f677c/0x17c0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:40.684156+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126009344 unmapped: 24600576 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:41.684369+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367884 data_alloc: 234881024 data_used: 17215488
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9a8e000/0x0/0x4ffc00000, data 0x16f677c/0x17c0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126009344 unmapped: 24600576 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:42.684546+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126009344 unmapped: 24600576 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:43.684728+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126009344 unmapped: 24600576 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9a8e000/0x0/0x4ffc00000, data 0x16f677c/0x17c0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:44.684918+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126009344 unmapped: 24600576 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:45.685128+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126009344 unmapped: 24600576 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:46.685367+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367884 data_alloc: 234881024 data_used: 17215488
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126009344 unmapped: 24600576 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:47.685537+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9a8e000/0x0/0x4ffc00000, data 0x16f677c/0x17c0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126009344 unmapped: 24600576 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:48.685757+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9a8e000/0x0/0x4ffc00000, data 0x16f677c/0x17c0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126009344 unmapped: 24600576 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:49.686016+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126009344 unmapped: 24600576 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:50.686228+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.644939423s of 14.761515617s, submitted: 44
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fb645800 session 0x5613fba12b40
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613fba13c20
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf50800
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 29237248 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf50800 session 0x5613fc73d0e0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:51.686400+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205509 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 29237248 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:52.686601+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 29237248 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:53.686789+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 29237248 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:54.686891+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 29237248 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:55.687035+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 29237248 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:56.687217+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205509 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 29237248 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:57.687409+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 29237248 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:58.687597+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 29237248 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:59.687870+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 29237248 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:00.688004+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 29237248 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:01.688204+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205509 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 29237248 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:02.688396+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 29237248 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:03.688600+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 29237248 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:04.688783+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 29237248 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:05.689000+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 29237248 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:06.689168+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205509 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 29237248 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:07.689378+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 29237248 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:08.689521+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 29237248 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:09.689766+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 29237248 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:10.689976+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf50000
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf50000 session 0x5613fba0e3c0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf51400
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf51400 session 0x5613fba0fe00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613fba0f0e0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf50000
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf50000 session 0x5613fba0e5a0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf50800
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.314212799s of 20.416582108s, submitted: 31
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf50800 session 0x5613fba0f4a0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb645800
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fb645800 session 0x5613f9069a40
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc100800
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121421824 unmapped: 33390592 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fc100800 session 0x5613fc6c2b40
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613fc3e7680
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:11.690126+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf50000
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf50000 session 0x5613fc73d0e0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260980 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa08c000/0x0/0x4ffc00000, data 0x110776c/0x11d0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121421824 unmapped: 33390592 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:12.690377+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121421824 unmapped: 33390592 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:13.690565+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa08c000/0x0/0x4ffc00000, data 0x110776c/0x11d0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121421824 unmapped: 33390592 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:14.690752+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121421824 unmapped: 33390592 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf50800
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf50800 session 0x5613fc73c5a0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:15.690914+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb645800
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fb645800 session 0x5613fba0e3c0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121405440 unmapped: 33406976 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:16.691074+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf4f000
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf4f000 session 0x5613fba0fe00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613fba0e5a0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264291 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121413632 unmapped: 33398784 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:17.691261+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf50000
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf50800
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa08b000/0x0/0x4ffc00000, data 0x110777c/0x11d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121430016 unmapped: 33382400 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:18.691453+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 122617856 unmapped: 32194560 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:19.691704+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 122617856 unmapped: 32194560 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:20.691853+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa08b000/0x0/0x4ffc00000, data 0x110777c/0x11d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 122617856 unmapped: 32194560 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:21.692055+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1317739 data_alloc: 234881024 data_used: 15486976
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 122617856 unmapped: 32194560 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:22.692229+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 122617856 unmapped: 32194560 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:23.692453+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa08b000/0x0/0x4ffc00000, data 0x110777c/0x11d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 122617856 unmapped: 32194560 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:24.692655+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 122617856 unmapped: 32194560 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:25.692843+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 122617856 unmapped: 32194560 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:26.692980+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa08b000/0x0/0x4ffc00000, data 0x110777c/0x11d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb645800
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fb645800 session 0x5613fba0e1e0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf4ac00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf4ac00 session 0x5613fc756f00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1317739 data_alloc: 234881024 data_used: 15486976
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc101400
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fc101400 session 0x5613fa1ff2c0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9046c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9046c00 session 0x5613fba0c5a0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 122634240 unmapped: 32178176 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.222948074s of 16.339372635s, submitted: 24
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:27.693122+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613f9162960
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf4ac00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf4ac00 session 0x5613f9cb7a40
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb645800
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fb645800 session 0x5613fb494f00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc101400
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fc101400 session 0x5613fc49da40
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fafbf800
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fafbf800 session 0x5613fb2cab40
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa08b000/0x0/0x4ffc00000, data 0x110777c/0x11d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 123207680 unmapped: 31604736 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:28.693253+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 30179328 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:29.693394+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 129384448 unmapped: 25427968 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:30.693603+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9119000/0x0/0x4ffc00000, data 0x20787de/0x2143000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613fc02be00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 129327104 unmapped: 25485312 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:31.693883+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf4ac00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf4ac00 session 0x5613faef7860
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1440180 data_alloc: 234881024 data_used: 15785984
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fafbf800
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fafbf800 session 0x5613fc49c780
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb645800
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 129343488 unmapped: 25468928 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:32.694052+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fb645800 session 0x5613f95590e0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 129212416 unmapped: 25600000 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:33.694239+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc101400
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc32fc00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 129212416 unmapped: 25600000 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:34.694413+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 129867776 unmapped: 24944640 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:35.694749+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 132448256 unmapped: 22364160 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:36.694943+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f90e7000/0x0/0x4ffc00000, data 0x20a8811/0x2175000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1493907 data_alloc: 234881024 data_used: 23171072
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f90e7000/0x0/0x4ffc00000, data 0x20a8811/0x2175000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 132481024 unmapped: 22331392 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:37.695198+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f90e7000/0x0/0x4ffc00000, data 0x20a8811/0x2175000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 132481024 unmapped: 22331392 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:38.695473+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 132481024 unmapped: 22331392 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:39.695681+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 132481024 unmapped: 22331392 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:40.695818+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 132497408 unmapped: 22315008 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:41.696045+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1493907 data_alloc: 234881024 data_used: 23171072
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:42.696237+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 132497408 unmapped: 22315008 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:43.696387+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 132497408 unmapped: 22315008 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f90e7000/0x0/0x4ffc00000, data 0x20a8811/0x2175000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:44.696593+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 132497408 unmapped: 22315008 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f90e7000/0x0/0x4ffc00000, data 0x20a8811/0x2175000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:45.696737+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 132497408 unmapped: 22315008 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.218862534s of 18.503250122s, submitted: 89
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:46.696862+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 135749632 unmapped: 19062784 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572493 data_alloc: 234881024 data_used: 23388160
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:47.697066+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 136396800 unmapped: 18415616 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:48.697251+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 136429568 unmapped: 18382848 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f86e4000/0x0/0x4ffc00000, data 0x2aab811/0x2b78000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:49.697517+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 136429568 unmapped: 18382848 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:50.697761+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 136429568 unmapped: 18382848 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f86e4000/0x0/0x4ffc00000, data 0x2aab811/0x2b78000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:51.697950+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 136429568 unmapped: 18382848 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1577791 data_alloc: 234881024 data_used: 23384064
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:52.698136+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 136429568 unmapped: 18382848 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f86e4000/0x0/0x4ffc00000, data 0x2aab811/0x2b78000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:53.698398+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 136429568 unmapped: 18382848 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:54.698694+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 136429568 unmapped: 18382848 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:55.699006+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 136429568 unmapped: 18382848 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:56.699152+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 136429568 unmapped: 18382848 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1577479 data_alloc: 234881024 data_used: 23388160
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:57.699333+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 136429568 unmapped: 18382848 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:58.700315+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 136429568 unmapped: 18382848 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f86c0000/0x0/0x4ffc00000, data 0x2acf811/0x2b9c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.937669754s of 13.194371223s, submitted: 97
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:59.701372+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 136470528 unmapped: 18341888 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fc101400 session 0x5613fa2130e0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fc32fc00 session 0x5613fc2aa000
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:00.701516+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 135651328 unmapped: 19161088 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613fc6c2f00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:01.701863+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 130596864 unmapped: 24215552 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1395812 data_alloc: 234881024 data_used: 15728640
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:02.702099+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 130596864 unmapped: 24215552 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:03.702347+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 130596864 unmapped: 24215552 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf50000 session 0x5613fba0e960
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf50800 session 0x5613fa212f00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf4ac00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:04.702557+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 130605056 unmapped: 24207360 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9446000/0x0/0x4ffc00000, data 0x193c77c/0x1a06000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [1,0,0,2])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf4ac00 session 0x5613f9558d20
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:05.702776+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 30916608 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:06.702960+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 30916608 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228937 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:07.703187+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 30916608 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fe000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:08.703482+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 30916608 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fe000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:09.703758+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 30916608 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fe000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:10.703978+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 30916608 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:11.704152+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 30916608 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228937 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:12.704419+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 30916608 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:13.704574+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 30916608 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fe000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:14.704831+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 30916608 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:15.705054+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 30916608 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fe000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:16.705400+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 30916608 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228937 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:17.705610+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 30916608 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:18.705839+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 30916608 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Dec 07 10:18:56 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1278704407' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:19.706177+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fe000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 30916608 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:20.706425+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 30916608 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:21.706767+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fe000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 30916608 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228937 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:22.706997+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 30916608 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:23.707199+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 30916608 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:24.707436+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 30916608 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:25.707720+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 30916608 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf4ac00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:26.707965+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 27.124614716s of 27.378499985s, submitted: 88
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf4ac00 session 0x5613fac71860
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fe000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613fb3ad680
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf50000
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf50000 session 0x5613fb3adc20
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf50800
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf50800 session 0x5613f9dd21e0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc32fc00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fc32fc00 session 0x5613fc73c5a0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125108224 unmapped: 29704192 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1254370 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:27.708167+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125108224 unmapped: 29704192 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:28.708338+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125108224 unmapped: 29704192 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:29.708706+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa1cd000/0x0/0x4ffc00000, data 0xbb57ce/0xc7f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125108224 unmapped: 29704192 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:30.708866+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125108224 unmapped: 29704192 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:31.709060+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613fc73cf00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa1cd000/0x0/0x4ffc00000, data 0xbb57ce/0xc7f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf4ac00
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125108224 unmapped: 29704192 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf50000
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa1cd000/0x0/0x4ffc00000, data 0xbb57ce/0xc7f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:32.709296+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1254370 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125108224 unmapped: 29704192 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:33.709486+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa1cd000/0x0/0x4ffc00000, data 0xbb57ce/0xc7f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125280256 unmapped: 29532160 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:34.709686+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 29515776 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:35.709876+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 29515776 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:36.710091+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 29515776 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:37.710374+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1269266 data_alloc: 234881024 data_used: 9789440
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 29515776 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:38.710547+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa1cd000/0x0/0x4ffc00000, data 0xbb57ce/0xc7f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 29515776 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:39.710750+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 29515776 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:40.710999+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 29515776 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:41.711160+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 29515776 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:42.711325+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1269266 data_alloc: 234881024 data_used: 9789440
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa1cd000/0x0/0x4ffc00000, data 0xbb57ce/0xc7f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 29515776 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:43.711538+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.777629852s of 17.851564407s, submitted: 26
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126697472 unmapped: 28114944 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:44.711770+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 128819200 unmapped: 25993216 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:45.712041+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9b37000/0x0/0x4ffc00000, data 0x123d7ce/0x1307000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [0,0,1])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 128393216 unmapped: 26419200 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:46.712212+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 128393216 unmapped: 26419200 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:47.712385+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1336436 data_alloc: 234881024 data_used: 10362880
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 128393216 unmapped: 26419200 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:48.712588+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 128393216 unmapped: 26419200 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:49.712864+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 128393216 unmapped: 26419200 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9b01000/0x0/0x4ffc00000, data 0x12797ce/0x1343000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:50.713112+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 128393216 unmapped: 26419200 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:51.713338+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9b01000/0x0/0x4ffc00000, data 0x12797ce/0x1343000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9b01000/0x0/0x4ffc00000, data 0x12797ce/0x1343000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 128393216 unmapped: 26419200 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:52.713478+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1330852 data_alloc: 234881024 data_used: 10366976
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9b06000/0x0/0x4ffc00000, data 0x127c7ce/0x1346000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 128393216 unmapped: 26419200 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:53.713713+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 128393216 unmapped: 26419200 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:54.713895+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 128393216 unmapped: 26419200 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:55.714081+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 128393216 unmapped: 26419200 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:56.714300+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9b06000/0x0/0x4ffc00000, data 0x127c7ce/0x1346000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 128393216 unmapped: 26419200 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:57.714541+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1330852 data_alloc: 234881024 data_used: 10366976
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 128393216 unmapped: 26419200 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:58.714730+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 128393216 unmapped: 26419200 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:59.714964+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9b06000/0x0/0x4ffc00000, data 0x127c7ce/0x1346000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 128393216 unmapped: 26419200 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:00.715196+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.237125397s of 16.531042099s, submitted: 113
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf4ac00 session 0x5613fc73c3c0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf50000 session 0x5613fba0c3c0
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf50800
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:01.715359+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf50800 session 0x5613fc73d680
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:02.715549+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:03.715744+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:04.715938+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:05.716141+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:06.716409+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:07.716587+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:08.716816+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:09.717108+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:10.717291+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:11.717471+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:12.717677+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:13.717872+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:14.718011+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:15.718179+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:16.718445+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:17.718606+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:18.718860+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:19.719042+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:20.719221+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:21.719377+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:22.719700+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:23.719917+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:24.720082+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:25.720267+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:26.720495+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:27.720806+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:28.720976+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:29.721147+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:30.721364+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:31.721507+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:32.737722+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:33.737889+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:34.738089+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:35.738276+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:36.738519+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:37.738738+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:38.738901+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:39.739089+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126377984 unmapped: 28434432 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:40.739249+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126377984 unmapped: 28434432 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:41.739485+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126377984 unmapped: 28434432 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:42.739688+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126377984 unmapped: 28434432 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:43.739875+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126377984 unmapped: 28434432 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:44.740063+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126377984 unmapped: 28434432 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:45.740279+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126377984 unmapped: 28434432 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:46.740472+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126377984 unmapped: 28434432 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:47.740645+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126377984 unmapped: 28434432 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:48.740815+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126377984 unmapped: 28434432 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:49.741086+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126377984 unmapped: 28434432 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:50.741239+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126386176 unmapped: 28426240 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:51.741464+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126386176 unmapped: 28426240 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:52.741672+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126386176 unmapped: 28426240 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:53.741812+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126386176 unmapped: 28426240 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:54.741989+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126386176 unmapped: 28426240 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:55.742153+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126386176 unmapped: 28426240 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:56.742292+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126386176 unmapped: 28426240 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:57.742473+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126394368 unmapped: 28418048 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:58.742657+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126394368 unmapped: 28418048 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:59.742891+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126394368 unmapped: 28418048 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:00.743090+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126394368 unmapped: 28418048 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:01.743259+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126394368 unmapped: 28418048 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:02.743495+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126394368 unmapped: 28418048 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:03.743663+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126402560 unmapped: 28409856 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:04.743802+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126402560 unmapped: 28409856 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:05.744021+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126402560 unmapped: 28409856 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:06.744263+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126402560 unmapped: 28409856 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:07.744468+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126402560 unmapped: 28409856 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:08.744689+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126402560 unmapped: 28409856 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:09.744896+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126402560 unmapped: 28409856 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:10.745067+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126402560 unmapped: 28409856 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:11.745253+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126402560 unmapped: 28409856 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:12.745484+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126402560 unmapped: 28409856 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:13.745707+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126402560 unmapped: 28409856 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:14.745885+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126402560 unmapped: 28409856 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:15.746062+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126402560 unmapped: 28409856 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:16.746235+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126402560 unmapped: 28409856 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:17.746450+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126402560 unmapped: 28409856 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:18.746688+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126402560 unmapped: 28409856 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:19.746881+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126410752 unmapped: 28401664 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:20.747020+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126410752 unmapped: 28401664 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:21.747165+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126410752 unmapped: 28401664 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:22.747341+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126410752 unmapped: 28401664 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:18:56 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:18:56 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:23.747492+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: do_command 'config diff' '{prefix=config diff}'
Dec 07 10:18:56 compute-1 ceph-osd[77581]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 07 10:18:56 compute-1 ceph-osd[77581]: do_command 'config show' '{prefix=config show}'
Dec 07 10:18:56 compute-1 ceph-osd[77581]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126369792 unmapped: 28442624 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: do_command 'counter dump' '{prefix=counter dump}'
Dec 07 10:18:56 compute-1 ceph-osd[77581]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 07 10:18:56 compute-1 ceph-osd[77581]: do_command 'counter schema' '{prefix=counter schema}'
Dec 07 10:18:56 compute-1 ceph-osd[77581]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 07 10:18:56 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:24.747664+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126091264 unmapped: 28721152 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:18:56 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:25.747791+0000)
Dec 07 10:18:56 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126205952 unmapped: 28606464 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:18:56 compute-1 ceph-osd[77581]: do_command 'log dump' '{prefix=log dump}'
Dec 07 10:18:56 compute-1 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 07 10:18:57 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 07 10:18:57 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3106265431' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 07 10:18:57 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:57 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:18:57 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:57.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:18:57 compute-1 ceph-mon[80077]: from='client.26066 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:18:57 compute-1 ceph-mon[80077]: from='client.16938 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:18:57 compute-1 ceph-mon[80077]: from='client.26084 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:18:57 compute-1 ceph-mon[80077]: from='client.16950 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:18:57 compute-1 ceph-mon[80077]: from='client.26380 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:18:57 compute-1 ceph-mon[80077]: from='client.26114 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:18:57 compute-1 ceph-mon[80077]: from='client.16968 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:18:57 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/3616412490' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 07 10:18:57 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/3502013220' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 07 10:18:57 compute-1 ceph-mon[80077]: from='client.26392 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:18:57 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/1278704407' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 07 10:18:57 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/1149218437' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 07 10:18:57 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/3213981585' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 07 10:18:57 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/3106265431' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 07 10:18:57 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec 07 10:18:57 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2302000382' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 07 10:18:57 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Dec 07 10:18:57 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2231897056' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec 07 10:18:57 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:18:58 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:58 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:18:58 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:18:58.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:18:58 compute-1 crontab[242573]: (root) LIST (root)
Dec 07 10:18:58 compute-1 ceph-mon[80077]: from='client.26135 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:18:58 compute-1 ceph-mon[80077]: from='client.16986 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:18:58 compute-1 ceph-mon[80077]: from='client.26407 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:18:58 compute-1 ceph-mon[80077]: from='client.17010 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:18:58 compute-1 ceph-mon[80077]: from='client.26150 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:18:58 compute-1 ceph-mon[80077]: from='client.26416 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:18:58 compute-1 ceph-mon[80077]: pgmap v1105: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 1 op/s
Dec 07 10:18:58 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/3345123056' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 07 10:18:58 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:18:58 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/2279388824' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 07 10:18:58 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/2302000382' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 07 10:18:58 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/4180362073' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec 07 10:18:58 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/2231897056' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec 07 10:18:58 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/76357406' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 07 10:18:58 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2263262449' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 07 10:18:58 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Dec 07 10:18:58 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3414607553' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec 07 10:18:59 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Dec 07 10:18:59 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3994763478' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec 07 10:18:59 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:18:59 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:18:59 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:18:59.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:18:59 compute-1 ceph-mon[80077]: from='client.17022 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:18:59 compute-1 ceph-mon[80077]: from='client.26177 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:18:59 compute-1 ceph-mon[80077]: from='client.26428 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:18:59 compute-1 ceph-mon[80077]: from='client.17049 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:18:59 compute-1 ceph-mon[80077]: from='client.26198 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:18:59 compute-1 ceph-mon[80077]: from='client.26446 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:18:59 compute-1 ceph-mon[80077]: from='client.17070 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:18:59 compute-1 ceph-mon[80077]: from='client.26216 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:18:59 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/2306248426' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec 07 10:18:59 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2285792475' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec 07 10:18:59 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/3414607553' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec 07 10:18:59 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/824909141' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec 07 10:18:59 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/3994763478' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec 07 10:18:59 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Dec 07 10:18:59 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2724544432' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec 07 10:18:59 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Dec 07 10:18:59 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/228440692' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec 07 10:19:00 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:00 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:19:00 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:00.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:19:00 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Dec 07 10:19:00 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1271617483' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec 07 10:19:00 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Dec 07 10:19:00 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3507890175' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec 07 10:19:00 compute-1 ceph-mon[80077]: from='client.26461 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:19:00 compute-1 ceph-mon[80077]: from='client.17079 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:19:00 compute-1 ceph-mon[80077]: from='client.17097 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:19:00 compute-1 ceph-mon[80077]: from='client.26485 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:19:00 compute-1 ceph-mon[80077]: from='client.17103 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:19:00 compute-1 ceph-mon[80077]: from='client.26237 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:19:00 compute-1 ceph-mon[80077]: pgmap v1106: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 07 10:19:00 compute-1 ceph-mon[80077]: from='client.26497 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:19:00 compute-1 ceph-mon[80077]: from='client.17118 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:19:00 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/358336022' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec 07 10:19:00 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/1212324181' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec 07 10:19:00 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/2724544432' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec 07 10:19:00 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/228440692' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec 07 10:19:00 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/4085620205' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec 07 10:19:00 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/4040182988' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec 07 10:19:00 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2052175755' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec 07 10:19:00 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/1271617483' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec 07 10:19:00 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/3507890175' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec 07 10:19:00 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/3050173386' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec 07 10:19:00 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1996380930' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Dec 07 10:19:00 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Dec 07 10:19:00 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/755298937' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Dec 07 10:19:01 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Dec 07 10:19:01 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2051690252' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Dec 07 10:19:01 compute-1 systemd[1]: Starting Hostname Service...
Dec 07 10:19:01 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Dec 07 10:19:01 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1656568064' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Dec 07 10:19:01 compute-1 systemd[1]: Started Hostname Service.
Dec 07 10:19:01 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:01 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:19:01 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:01.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:19:01 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Dec 07 10:19:01 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3801897142' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Dec 07 10:19:01 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Dec 07 10:19:01 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4171268841' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Dec 07 10:19:01 compute-1 ceph-mon[80077]: from='client.26515 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:19:01 compute-1 ceph-mon[80077]: from='client.26536 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:19:01 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/3460278630' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec 07 10:19:01 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/755298937' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Dec 07 10:19:01 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2847317681' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec 07 10:19:01 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/3477372415' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Dec 07 10:19:01 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/329368812' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Dec 07 10:19:01 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/797649901' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Dec 07 10:19:01 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/820384019' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec 07 10:19:01 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/2051690252' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Dec 07 10:19:01 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/106739135' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec 07 10:19:01 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/1656568064' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Dec 07 10:19:01 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/4093201090' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Dec 07 10:19:01 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/4045264146' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Dec 07 10:19:01 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/3801897142' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Dec 07 10:19:01 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/355420560' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Dec 07 10:19:01 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 07 10:19:01 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3090897219' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 07 10:19:02 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Dec 07 10:19:02 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/189311353' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Dec 07 10:19:02 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:02 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:19:02 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:02.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:19:02 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Dec 07 10:19:02 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/43913026' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Dec 07 10:19:02 compute-1 podman[243165]: 2025-12-07 10:19:02.625648322 +0000 UTC m=+0.122808981 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 07 10:19:02 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:19:03 compute-1 ceph-mon[80077]: pgmap v1107: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:19:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/4171268841' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Dec 07 10:19:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/3868375653' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Dec 07 10:19:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/477203890' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Dec 07 10:19:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/526849083' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 07 10:19:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/3241635358' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Dec 07 10:19:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/3090897219' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 07 10:19:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2397512512' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Dec 07 10:19:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/189311353' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Dec 07 10:19:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/641506262' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Dec 07 10:19:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/878584831' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Dec 07 10:19:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/3004704076' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Dec 07 10:19:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1650766654' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Dec 07 10:19:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2811210090' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Dec 07 10:19:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/43913026' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Dec 07 10:19:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/672448622' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 07 10:19:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/610642075' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Dec 07 10:19:03 compute-1 sshd-session[243256]: Invalid user postgres from 104.248.193.130 port 41548
Dec 07 10:19:03 compute-1 sshd-session[243256]: Connection closed by invalid user postgres 104.248.193.130 port 41548 [preauth]
Dec 07 10:19:03 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:03 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:19:03 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:03.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:19:04 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Dec 07 10:19:04 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1554185712' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Dec 07 10:19:04 compute-1 ceph-mon[80077]: from='client.26378 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:19:04 compute-1 ceph-mon[80077]: from='client.17274 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:19:04 compute-1 ceph-mon[80077]: from='client.26387 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:19:04 compute-1 ceph-mon[80077]: from='client.17307 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:19:04 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/947260732' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Dec 07 10:19:04 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/3365241911' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Dec 07 10:19:04 compute-1 ceph-mon[80077]: from='client.17298 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:19:04 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/4255241510' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 07 10:19:04 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/4255241510' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 07 10:19:04 compute-1 ceph-mon[80077]: from='client.26405 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:19:04 compute-1 ceph-mon[80077]: pgmap v1108: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:19:04 compute-1 ceph-mon[80077]: from='client.26420 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:19:04 compute-1 ceph-mon[80077]: from='client.17328 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:19:04 compute-1 ceph-mon[80077]: from='client.26659 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:19:04 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2154176457' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Dec 07 10:19:04 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1993314288' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Dec 07 10:19:04 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/1554185712' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Dec 07 10:19:04 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:04 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:19:04 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:04.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:19:04 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "versions"} v 0)
Dec 07 10:19:04 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2882762250' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Dec 07 10:19:05 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Dec 07 10:19:05 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2128077742' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 07 10:19:05 compute-1 ceph-mon[80077]: from='client.26435 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:19:05 compute-1 ceph-mon[80077]: from='client.17358 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:19:05 compute-1 ceph-mon[80077]: from='client.26677 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:19:05 compute-1 ceph-mon[80077]: from='client.26683 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:19:05 compute-1 ceph-mon[80077]: from='client.17379 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:19:05 compute-1 ceph-mon[80077]: from='client.17385 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:19:05 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1550008820' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Dec 07 10:19:05 compute-1 ceph-mon[80077]: from='client.26701 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:19:05 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/2882762250' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Dec 07 10:19:05 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/455888881' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 07 10:19:05 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/1210818251' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Dec 07 10:19:05 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/2128077742' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 07 10:19:05 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:05 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:19:05 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:05.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:19:05 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Dec 07 10:19:05 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3143154562' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Dec 07 10:19:06 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:06 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:19:06 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:06.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:19:06 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 07 10:19:06 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 07 10:19:06 compute-1 ceph-mon[80077]: from='client.17406 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:19:06 compute-1 ceph-mon[80077]: from='client.26480 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:19:06 compute-1 ceph-mon[80077]: from='client.26483 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:19:06 compute-1 ceph-mon[80077]: from='client.17430 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:19:06 compute-1 ceph-mon[80077]: from='client.26498 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:19:06 compute-1 ceph-mon[80077]: from='client.26734 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:19:06 compute-1 ceph-mon[80077]: pgmap v1109: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:19:06 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/25312416' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Dec 07 10:19:06 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2985955122' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Dec 07 10:19:06 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/3143154562' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Dec 07 10:19:06 compute-1 ceph-mon[80077]: from='client.26504 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:19:06 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 07 10:19:06 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 07 10:19:06 compute-1 ceph-mon[80077]: from='client.26516 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:19:06 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 07 10:19:06 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 07 10:19:06 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 07 10:19:06 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 07 10:19:06 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 07 10:19:06 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 07 10:19:06 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 07 10:19:06 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 07 10:19:06 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2935185436' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 07 10:19:06 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 07 10:19:06 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 07 10:19:06 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 07 10:19:06 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 07 10:19:06 compute-1 sudo[243749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:19:06 compute-1 sudo[243749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:19:06 compute-1 sudo[243749]: pam_unix(sudo:session): session closed for user root
Dec 07 10:19:06 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 07 10:19:06 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 07 10:19:06 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 07 10:19:06 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 07 10:19:06 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 07 10:19:06 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 07 10:19:06 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Dec 07 10:19:06 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4213152370' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Dec 07 10:19:07 compute-1 ceph-mon[80077]: from='client.26752 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:19:07 compute-1 ceph-mon[80077]: from='client.26770 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:19:07 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 07 10:19:07 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 07 10:19:07 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 07 10:19:07 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 07 10:19:07 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 07 10:19:07 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 07 10:19:07 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/1827235291' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Dec 07 10:19:07 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 07 10:19:07 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 07 10:19:07 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 07 10:19:07 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 07 10:19:07 compute-1 ceph-mon[80077]: from='client.26806 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:19:07 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 07 10:19:07 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 07 10:19:07 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/3317557144' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Dec 07 10:19:07 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 07 10:19:07 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 07 10:19:07 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 07 10:19:07 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 07 10:19:07 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/4213152370' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Dec 07 10:19:07 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 07 10:19:07 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 07 10:19:07 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 07 10:19:07 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 07 10:19:07 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 07 10:19:07 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 07 10:19:07 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:07 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:19:07 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:07.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:19:07 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Dec 07 10:19:07 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2428026845' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Dec 07 10:19:07 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:19:08 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:08 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:19:08 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:08.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:19:08 compute-1 ceph-mon[80077]: from='client.17595 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:19:08 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 07 10:19:08 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 07 10:19:08 compute-1 ceph-mon[80077]: pgmap v1110: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:19:08 compute-1 ceph-mon[80077]: from='client.26624 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:19:08 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 07 10:19:08 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 07 10:19:08 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/531597163' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Dec 07 10:19:08 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/1350549379' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Dec 07 10:19:08 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/2428026845' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Dec 07 10:19:08 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1197806118' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Dec 07 10:19:08 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df"} v 0)
Dec 07 10:19:08 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3422751038' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Dec 07 10:19:08 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs dump"} v 0)
Dec 07 10:19:08 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3343041514' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Dec 07 10:19:09 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs ls"} v 0)
Dec 07 10:19:09 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1549315097' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Dec 07 10:19:09 compute-1 ceph-mon[80077]: from='client.26884 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:19:09 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/3422751038' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Dec 07 10:19:09 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/3476229112' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Dec 07 10:19:09 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/4246822110' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Dec 07 10:19:09 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/3343041514' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Dec 07 10:19:09 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/534413216' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Dec 07 10:19:09 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2347541180' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Dec 07 10:19:09 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/1549315097' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Dec 07 10:19:09 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:09 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:19:09 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:09.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:19:09 compute-1 podman[244140]: 2025-12-07 10:19:09.573444789 +0000 UTC m=+0.074894077 container health_status 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 07 10:19:10 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mds stat"} v 0)
Dec 07 10:19:10 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2423799279' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Dec 07 10:19:10 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:10 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:19:10 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:10.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:19:10 compute-1 ceph-mon[80077]: from='client.17652 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:19:10 compute-1 ceph-mon[80077]: pgmap v1111: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:19:10 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2345798662' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Dec 07 10:19:10 compute-1 ceph-mon[80077]: from='client.26690 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:19:10 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/4052365325' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Dec 07 10:19:10 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/618094463' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Dec 07 10:19:10 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/2423799279' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Dec 07 10:19:10 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1962016154' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Dec 07 10:19:10 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump"} v 0)
Dec 07 10:19:10 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2183455883' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Dec 07 10:19:11 compute-1 ceph-mon[80077]: from='client.26917 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:19:11 compute-1 ceph-mon[80077]: from='client.17685 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:19:11 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/2183455883' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Dec 07 10:19:11 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/4287580869' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Dec 07 10:19:11 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/305546885' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Dec 07 10:19:11 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/454064808' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Dec 07 10:19:11 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:11 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:19:11 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:11.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:19:12 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:12 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:19:12 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:12.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:19:12 compute-1 ovs-appctl[244951]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Dec 07 10:19:12 compute-1 ovs-appctl[244958]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Dec 07 10:19:12 compute-1 ovs-appctl[244981]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Dec 07 10:19:12 compute-1 ceph-mon[80077]: from='client.26726 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:19:12 compute-1 ceph-mon[80077]: from='client.17697 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:19:12 compute-1 ceph-mon[80077]: pgmap v1112: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:19:12 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/765091854' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Dec 07 10:19:12 compute-1 ceph-mon[80077]: from='client.17712 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:19:12 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/403766513' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Dec 07 10:19:12 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/3086070880' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Dec 07 10:19:12 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/803495527' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Dec 07 10:19:12 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd dump"} v 0)
Dec 07 10:19:12 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1559393301' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Dec 07 10:19:12 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd numa-status"} v 0)
Dec 07 10:19:12 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1068130616' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Dec 07 10:19:12 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:19:13 compute-1 ceph-mon[80077]: from='client.26747 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:19:13 compute-1 ceph-mon[80077]: from='client.26947 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:19:13 compute-1 ceph-mon[80077]: from='client.26756 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:19:13 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:19:13 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/1559393301' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Dec 07 10:19:13 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/1068130616' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Dec 07 10:19:13 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:13 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:19:13 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:13.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:19:14 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0)
Dec 07 10:19:14 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2869689675' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Dec 07 10:19:14 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:14 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:19:14 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:14.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:19:14 compute-1 ceph-mon[80077]: from='client.26962 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:19:14 compute-1 ceph-mon[80077]: from='client.17745 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:19:14 compute-1 ceph-mon[80077]: from='client.26968 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:19:14 compute-1 ceph-mon[80077]: from='client.17757 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:19:14 compute-1 ceph-mon[80077]: from='client.26774 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:19:14 compute-1 ceph-mon[80077]: pgmap v1113: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:19:14 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1658715272' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Dec 07 10:19:14 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/3964626857' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Dec 07 10:19:14 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/3639348985' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Dec 07 10:19:14 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/608021361' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Dec 07 10:19:14 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/2869689675' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Dec 07 10:19:14 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd stat"} v 0)
Dec 07 10:19:14 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2401811265' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Dec 07 10:19:15 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:15 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:19:15 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:15.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:19:15 compute-1 ceph-mon[80077]: from='client.26786 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:19:15 compute-1 ceph-mon[80077]: from='client.17784 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:19:15 compute-1 ceph-mon[80077]: from='client.17787 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:19:15 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/2401811265' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Dec 07 10:19:15 compute-1 ceph-mon[80077]: from='client.17793 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:19:15 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1499075912' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 07 10:19:15 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/3848430948' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Dec 07 10:19:16 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:16 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:19:16 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:16.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:19:16 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "time-sync-status"} v 0)
Dec 07 10:19:16 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3307127373' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Dec 07 10:19:16 compute-1 ceph-mon[80077]: from='client.26995 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:19:16 compute-1 ceph-mon[80077]: from='client.26813 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:19:16 compute-1 ceph-mon[80077]: pgmap v1114: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:19:16 compute-1 ceph-mon[80077]: from='client.26819 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:19:16 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/694475461' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Dec 07 10:19:16 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/3585056831' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Dec 07 10:19:16 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/20433843' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Dec 07 10:19:16 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/3740387197' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 07 10:19:16 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/3307127373' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Dec 07 10:19:16 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0)
Dec 07 10:19:16 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3332862357' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Dec 07 10:19:17 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:17 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:19:17 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:17.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:19:17 compute-1 podman[246456]: 2025-12-07 10:19:17.598953314 +0000 UTC m=+0.082201297 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 07 10:19:17 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0)
Dec 07 10:19:17 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4063284453' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 07 10:19:17 compute-1 ceph-mon[80077]: from='client.27022 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:19:17 compute-1 ceph-mon[80077]: from='client.27028 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:19:17 compute-1 ceph-mon[80077]: from='client.17835 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:19:17 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/418666022' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 07 10:19:17 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1693107361' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 07 10:19:17 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/3332862357' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Dec 07 10:19:17 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2417138203' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Dec 07 10:19:17 compute-1 ceph-mon[80077]: from='client.26849 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:19:17 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/2979849283' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Dec 07 10:19:17 compute-1 ceph-mon[80077]: pgmap v1115: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:19:17 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/1402306304' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Dec 07 10:19:17 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1691614735' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Dec 07 10:19:17 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:19:18 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0)
Dec 07 10:19:18 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3073864784' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Dec 07 10:19:18 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:18 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:19:18 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:18.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:19:18 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0)
Dec 07 10:19:18 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4212714338' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Dec 07 10:19:18 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/4063284453' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 07 10:19:18 compute-1 ceph-mon[80077]: from='client.27067 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:19:18 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/3876763443' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Dec 07 10:19:18 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/3073864784' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Dec 07 10:19:18 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2627435991' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 07 10:19:18 compute-1 ceph-mon[80077]: from='client.17889 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:19:18 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/4212714338' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Dec 07 10:19:19 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0)
Dec 07 10:19:19 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/211618568' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Dec 07 10:19:19 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:19 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:19:19 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:19.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:19:19 compute-1 ceph-mon[80077]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #61. Immutable memtables: 0.
Dec 07 10:19:19 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:19:19.677968) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 07 10:19:19 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 61
Dec 07 10:19:19 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102759678013, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2692, "num_deletes": 251, "total_data_size": 6395605, "memory_usage": 6480048, "flush_reason": "Manual Compaction"}
Dec 07 10:19:19 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #62: started
Dec 07 10:19:19 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/846612334' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Dec 07 10:19:19 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2033758758' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Dec 07 10:19:19 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/211618568' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Dec 07 10:19:19 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2770818407' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Dec 07 10:19:19 compute-1 ceph-mon[80077]: pgmap v1116: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:19:19 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/4236617164' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Dec 07 10:19:19 compute-1 ceph-mon[80077]: from='client.26888 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:19:19 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/1528204455' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Dec 07 10:19:19 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102759707145, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 62, "file_size": 4147469, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31608, "largest_seqno": 34295, "table_properties": {"data_size": 4136048, "index_size": 7083, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3269, "raw_key_size": 28406, "raw_average_key_size": 21, "raw_value_size": 4111614, "raw_average_value_size": 3182, "num_data_blocks": 303, "num_entries": 1292, "num_filter_entries": 1292, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765102561, "oldest_key_time": 1765102561, "file_creation_time": 1765102759, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "19b7cd17-892b-4642-8771-311739802c4a", "db_session_id": "JE48258Z8V09XG5T5TD7", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Dec 07 10:19:19 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 29252 microseconds, and 10356 cpu microseconds.
Dec 07 10:19:19 compute-1 ceph-mon[80077]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 07 10:19:19 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:19:19.707213) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #62: 4147469 bytes OK
Dec 07 10:19:19 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:19:19.707239) [db/memtable_list.cc:519] [default] Level-0 commit table #62 started
Dec 07 10:19:19 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:19:19.709226) [db/memtable_list.cc:722] [default] Level-0 commit table #62: memtable #1 done
Dec 07 10:19:19 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:19:19.709239) EVENT_LOG_v1 {"time_micros": 1765102759709235, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 07 10:19:19 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:19:19.709258) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 07 10:19:19 compute-1 ceph-mon[80077]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 6383014, prev total WAL file size 6383014, number of live WAL files 2.
Dec 07 10:19:19 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000058.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 10:19:19 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:19:19.710825) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Dec 07 10:19:19 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 07 10:19:19 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [62(4050KB)], [60(12MB)]
Dec 07 10:19:19 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102759710857, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [62], "files_L6": [60], "score": -1, "input_data_size": 16926464, "oldest_snapshot_seqno": -1}
Dec 07 10:19:19 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #63: 6683 keys, 14767885 bytes, temperature: kUnknown
Dec 07 10:19:19 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102759804019, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 63, "file_size": 14767885, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14723734, "index_size": 26311, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16773, "raw_key_size": 171599, "raw_average_key_size": 25, "raw_value_size": 14604202, "raw_average_value_size": 2185, "num_data_blocks": 1057, "num_entries": 6683, "num_filter_entries": 6683, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100473, "oldest_key_time": 0, "file_creation_time": 1765102759, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "19b7cd17-892b-4642-8771-311739802c4a", "db_session_id": "JE48258Z8V09XG5T5TD7", "orig_file_number": 63, "seqno_to_time_mapping": "N/A"}}
Dec 07 10:19:19 compute-1 ceph-mon[80077]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 07 10:19:19 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:19:19.804404) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 14767885 bytes
Dec 07 10:19:19 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:19:19.808105) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 181.5 rd, 158.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.0, 12.2 +0.0 blob) out(14.1 +0.0 blob), read-write-amplify(7.6) write-amplify(3.6) OK, records in: 7199, records dropped: 516 output_compression: NoCompression
Dec 07 10:19:19 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:19:19.808145) EVENT_LOG_v1 {"time_micros": 1765102759808127, "job": 36, "event": "compaction_finished", "compaction_time_micros": 93282, "compaction_time_cpu_micros": 28667, "output_level": 6, "num_output_files": 1, "total_output_size": 14767885, "num_input_records": 7199, "num_output_records": 6683, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 07 10:19:19 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 10:19:19 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102759810001, "job": 36, "event": "table_file_deletion", "file_number": 62}
Dec 07 10:19:19 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000060.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 10:19:19 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102759814544, "job": 36, "event": "table_file_deletion", "file_number": 60}
Dec 07 10:19:19 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:19:19.710768) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:19:19 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:19:19.814599) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:19:19 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:19:19.814605) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:19:19 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:19:19.814608) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:19:19 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:19:19.814612) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:19:19 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:19:19.814654) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:19:19 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0)
Dec 07 10:19:19 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2239579914' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Dec 07 10:19:20 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:20 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:19:20 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:20.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:19:20 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0)
Dec 07 10:19:20 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1364548918' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Dec 07 10:19:20 compute-1 ceph-mon[80077]: from='client.17913 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:19:20 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/2239579914' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Dec 07 10:19:20 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/4059141403' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Dec 07 10:19:20 compute-1 ceph-mon[80077]: from='client.27109 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:19:20 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/1364548918' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Dec 07 10:19:20 compute-1 ceph-mon[80077]: from='client.17937 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:19:20 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/3342900046' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Dec 07 10:19:21 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0)
Dec 07 10:19:21 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2837594218' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Dec 07 10:19:21 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:21 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:19:21 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:21.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:19:21 compute-1 ceph-mon[80077]: from='client.26912 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:19:21 compute-1 ceph-mon[80077]: from='client.17952 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:19:21 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2970019277' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Dec 07 10:19:21 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/2837594218' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Dec 07 10:19:21 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/645209801' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Dec 07 10:19:21 compute-1 ceph-mon[80077]: pgmap v1117: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:19:21 compute-1 ceph-mon[80077]: from='client.27130 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:19:21 compute-1 ceph-mon[80077]: from='client.26924 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:19:21 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1341636885' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Dec 07 10:19:22 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:22 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:19:22 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:22.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:19:22 compute-1 nova_compute[230488]: 2025-12-07 10:19:22.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:19:22 compute-1 nova_compute[230488]: 2025-12-07 10:19:22.270 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:19:22 compute-1 nova_compute[230488]: 2025-12-07 10:19:22.271 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 07 10:19:22 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0)
Dec 07 10:19:22 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1618359681' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Dec 07 10:19:22 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/1772933073' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Dec 07 10:19:22 compute-1 ceph-mon[80077]: from='client.26930 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:19:22 compute-1 ceph-mon[80077]: from='client.17991 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:19:22 compute-1 ceph-mon[80077]: from='client.27148 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:19:22 compute-1 ceph-mon[80077]: from='client.17997 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:19:22 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/1618359681' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Dec 07 10:19:22 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1507606085' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 07 10:19:22 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0)
Dec 07 10:19:22 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1919237944' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Dec 07 10:19:22 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:19:23 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:23 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:19:23 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:23.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:19:23 compute-1 ceph-mon[80077]: from='client.27154 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:19:23 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/1919237944' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Dec 07 10:19:23 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/2361377205' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Dec 07 10:19:23 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/1147204193' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Dec 07 10:19:23 compute-1 ceph-mon[80077]: from='client.18027 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:19:23 compute-1 ceph-mon[80077]: pgmap v1118: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:19:23 compute-1 ceph-mon[80077]: from='client.18033 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:19:23 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/1432731850' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Dec 07 10:19:24 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0)
Dec 07 10:19:24 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4022018715' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 07 10:19:24 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:24 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:19:24 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:24.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:19:24 compute-1 nova_compute[230488]: 2025-12-07 10:19:24.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:19:24 compute-1 nova_compute[230488]: 2025-12-07 10:19:24.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:19:24 compute-1 nova_compute[230488]: 2025-12-07 10:19:24.307 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:19:24 compute-1 nova_compute[230488]: 2025-12-07 10:19:24.307 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:19:24 compute-1 nova_compute[230488]: 2025-12-07 10:19:24.308 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:19:24 compute-1 nova_compute[230488]: 2025-12-07 10:19:24.308 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 07 10:19:24 compute-1 nova_compute[230488]: 2025-12-07 10:19:24.308 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 07 10:19:24 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0)
Dec 07 10:19:24 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/854499368' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Dec 07 10:19:24 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 07 10:19:24 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2925776283' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:19:24 compute-1 nova_compute[230488]: 2025-12-07 10:19:24.800 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 07 10:19:24 compute-1 virtqemud[229835]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec 07 10:19:25 compute-1 nova_compute[230488]: 2025-12-07 10:19:25.016 230492 WARNING nova.virt.libvirt.driver [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 07 10:19:25 compute-1 nova_compute[230488]: 2025-12-07 10:19:25.017 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5005MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 07 10:19:25 compute-1 nova_compute[230488]: 2025-12-07 10:19:25.017 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:19:25 compute-1 nova_compute[230488]: 2025-12-07 10:19:25.018 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:19:25 compute-1 nova_compute[230488]: 2025-12-07 10:19:25.102 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 07 10:19:25 compute-1 nova_compute[230488]: 2025-12-07 10:19:25.102 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 07 10:19:25 compute-1 nova_compute[230488]: 2025-12-07 10:19:25.131 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 07 10:19:25 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:25 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:19:25 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:25.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:19:25 compute-1 systemd[1]: Starting Time & Date Service...
Dec 07 10:19:25 compute-1 ceph-mon[80077]: from='client.26963 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:19:25 compute-1 ceph-mon[80077]: from='client.27178 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:19:25 compute-1 ceph-mon[80077]: from='client.18051 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:19:25 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/4022018715' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 07 10:19:25 compute-1 ceph-mon[80077]: from='client.27187 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:19:25 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1668245142' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 07 10:19:25 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/854499368' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Dec 07 10:19:25 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/365691918' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 07 10:19:25 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/2925776283' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:19:25 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/835152910' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Dec 07 10:19:25 compute-1 systemd[1]: Started Time & Date Service.
Dec 07 10:19:25 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 07 10:19:25 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/818098408' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:19:25 compute-1 nova_compute[230488]: 2025-12-07 10:19:25.672 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 07 10:19:25 compute-1 nova_compute[230488]: 2025-12-07 10:19:25.728 230492 DEBUG nova.compute.provider_tree [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Inventory has not changed in ProviderTree for provider: 58b51610-0751-43d9-94a3-66540bffec81 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 07 10:19:25 compute-1 nova_compute[230488]: 2025-12-07 10:19:25.765 230492 DEBUG nova.scheduler.client.report [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Inventory has not changed for provider 58b51610-0751-43d9-94a3-66540bffec81 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 07 10:19:25 compute-1 nova_compute[230488]: 2025-12-07 10:19:25.767 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 07 10:19:25 compute-1 nova_compute[230488]: 2025-12-07 10:19:25.768 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.750s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:19:26 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:26 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:19:26 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:26.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:19:26 compute-1 sudo[247442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:19:26 compute-1 sudo[247442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:19:26 compute-1 sudo[247442]: pam_unix(sudo:session): session closed for user root
Dec 07 10:19:26 compute-1 ceph-mon[80077]: from='client.27005 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:19:26 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2556671210' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Dec 07 10:19:26 compute-1 ceph-mon[80077]: pgmap v1119: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:19:26 compute-1 ceph-mon[80077]: from='client.27211 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:19:26 compute-1 ceph-mon[80077]: from='client.18081 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:19:26 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/818098408' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:19:26 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/1679196041' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 07 10:19:26 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0)
Dec 07 10:19:26 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/271785984' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Dec 07 10:19:26 compute-1 nova_compute[230488]: 2025-12-07 10:19:26.768 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:19:27 compute-1 nova_compute[230488]: 2025-12-07 10:19:27.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:19:27 compute-1 nova_compute[230488]: 2025-12-07 10:19:27.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:19:27 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:27 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:19:27 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:27.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:19:27 compute-1 ceph-mon[80077]: from='client.27223 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:19:27 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/4217755618' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 07 10:19:27 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/271785984' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Dec 07 10:19:27 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/3317990797' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Dec 07 10:19:27 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/3471426469' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:19:27 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:19:27 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:19:28 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:28 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:19:28 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:28.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:19:28 compute-1 nova_compute[230488]: 2025-12-07 10:19:28.270 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:19:28 compute-1 nova_compute[230488]: 2025-12-07 10:19:28.270 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 07 10:19:28 compute-1 nova_compute[230488]: 2025-12-07 10:19:28.271 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 07 10:19:28 compute-1 nova_compute[230488]: 2025-12-07 10:19:28.365 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 07 10:19:28 compute-1 ceph-mon[80077]: pgmap v1120: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:19:28 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1325873371' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:19:28 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/884993203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:19:29 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:29 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:19:29 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:29.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:19:29 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/249192277' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:19:29 compute-1 ceph-mon[80077]: pgmap v1121: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:19:30 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:30 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:19:30 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:30.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:19:31 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:31 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:19:31 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:31.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:19:32 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:32 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:19:32 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:32.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:19:32 compute-1 ceph-mon[80077]: pgmap v1122: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:19:32 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:19:33 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:33 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:19:33 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:33.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:19:33 compute-1 podman[247547]: 2025-12-07 10:19:33.720546203 +0000 UTC m=+0.201892633 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec 07 10:19:34 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:34 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:19:34 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:34.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:19:34 compute-1 nova_compute[230488]: 2025-12-07 10:19:34.361 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:19:34 compute-1 ceph-mon[80077]: pgmap v1123: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:19:35 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:35 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:19:35 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:35.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:19:35 compute-1 ceph-mon[80077]: pgmap v1124: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:19:36 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:36 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:19:36 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:36.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:19:37 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:37 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:19:37 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:37.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:19:37 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:19:38 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:38 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:19:38 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:38.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:19:38 compute-1 ceph-mon[80077]: pgmap v1125: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:19:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:19:38.656 142603 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:19:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:19:38.657 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:19:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:19:38.657 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:19:39 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:39 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:19:39 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:39.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:19:40 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:40 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:19:40 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:40.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:19:40 compute-1 ceph-mon[80077]: pgmap v1126: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:19:40 compute-1 podman[247578]: 2025-12-07 10:19:40.605472461 +0000 UTC m=+0.098994455 container health_status 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 07 10:19:41 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:41 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:19:41 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:41.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:19:42 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:42 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:19:42 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:42.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:19:42 compute-1 ceph-mon[80077]: pgmap v1127: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:19:42 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:19:43 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:43 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:19:43 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:43.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:19:43 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:19:44 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:44 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:19:44 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:44.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:19:44 compute-1 ceph-mon[80077]: pgmap v1128: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:19:45 compute-1 sshd-session[247601]: Invalid user postgres from 104.248.193.130 port 41660
Dec 07 10:19:45 compute-1 sshd-session[247601]: Connection closed by invalid user postgres 104.248.193.130 port 41660 [preauth]
Dec 07 10:19:45 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:45 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:19:45 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:45.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:19:45 compute-1 ceph-mon[80077]: pgmap v1129: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:19:46 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:46 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:19:46 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:46.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:19:46 compute-1 sudo[247604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:19:46 compute-1 sudo[247604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:19:46 compute-1 sudo[247604]: pam_unix(sudo:session): session closed for user root
Dec 07 10:19:47 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:47 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:19:47 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:47.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:19:47 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:19:48 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:48 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:19:48 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:48.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:19:48 compute-1 ceph-mon[80077]: pgmap v1130: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:19:48 compute-1 podman[247630]: 2025-12-07 10:19:48.5866828 +0000 UTC m=+0.077779907 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Dec 07 10:19:49 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:49 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:19:49 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:49.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:19:50 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:50 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:19:50 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:50.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:19:50 compute-1 ceph-mon[80077]: pgmap v1131: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:19:51 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:51 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:19:51 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:51.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:19:52 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:52 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:19:52 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:52.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:19:52 compute-1 ceph-mon[80077]: pgmap v1132: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:19:52 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:19:53 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:53 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:19:53 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:53.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:19:54 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:54 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:19:54 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:54.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:19:54 compute-1 sudo[247653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 10:19:54 compute-1 sudo[247653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:19:54 compute-1 sudo[247653]: pam_unix(sudo:session): session closed for user root
Dec 07 10:19:54 compute-1 ceph-mon[80077]: pgmap v1133: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:19:54 compute-1 sudo[247678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 07 10:19:54 compute-1 sudo[247678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:19:54 compute-1 sudo[247678]: pam_unix(sudo:session): session closed for user root
Dec 07 10:19:55 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:55 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:19:55 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:55.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:19:55 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:19:55 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 07 10:19:55 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:19:55 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:19:55 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 07 10:19:55 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 07 10:19:55 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:19:55 compute-1 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 07 10:19:55 compute-1 sshd-session[247736]: Connection closed by 161.35.84.99 port 45452
Dec 07 10:19:55 compute-1 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 07 10:19:56 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:56 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:19:56 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:56.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:19:56 compute-1 ceph-mon[80077]: pgmap v1134: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:19:57 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:57 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:19:57 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:57.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:19:57 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:19:57 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:19:58 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:58 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:19:58 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:19:58.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:19:58 compute-1 ceph-mon[80077]: pgmap v1135: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Dec 07 10:19:59 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:19:59 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:19:59 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:19:59.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:19:59 compute-1 ceph-mon[80077]: pgmap v1136: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:20:00 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:00 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:20:00 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:00.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:20:00 compute-1 ceph-mon[80077]: Health detail: HEALTH_WARN 1 failed cephadm daemon(s)
Dec 07 10:20:00 compute-1 ceph-mon[80077]: [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
Dec 07 10:20:00 compute-1 ceph-mon[80077]:     daemon nfs.cephfs.0.0.compute-1.jddrlu on compute-1 is in error state
Dec 07 10:20:01 compute-1 sudo[247744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 07 10:20:01 compute-1 sudo[247744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:20:01 compute-1 sudo[247744]: pam_unix(sudo:session): session closed for user root
Dec 07 10:20:01 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:01 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:20:01 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:01.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:20:01 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:20:01 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:20:01 compute-1 ceph-mon[80077]: pgmap v1137: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Dec 07 10:20:02 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:02 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:20:02 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:02.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:20:02 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:20:02 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/2500032481' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 07 10:20:02 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/2500032481' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 07 10:20:03 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:03 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:20:03 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:03.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:20:03 compute-1 ceph-mon[80077]: pgmap v1138: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Dec 07 10:20:04 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:04 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:20:04 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:04.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:20:04 compute-1 podman[247771]: 2025-12-07 10:20:04.627690697 +0000 UTC m=+0.115684129 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 07 10:20:05 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:05 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:20:05 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:05.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:20:06 compute-1 ceph-mon[80077]: pgmap v1139: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:20:06 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:06 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:20:06 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:06.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:20:06 compute-1 sudo[247798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:20:06 compute-1 sudo[247798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:20:06 compute-1 sudo[247798]: pam_unix(sudo:session): session closed for user root
Dec 07 10:20:07 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:07 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:20:07 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:07.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:20:07 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:20:08 compute-1 ceph-mon[80077]: pgmap v1140: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:20:08 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:08 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:20:08 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:08.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:20:09 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:09 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:20:09 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:09.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:20:10 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:10 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:20:10 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:10.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:20:10 compute-1 ceph-mon[80077]: pgmap v1141: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:20:11 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:11 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:20:11 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:11.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:20:11 compute-1 podman[247825]: 2025-12-07 10:20:11.572524484 +0000 UTC m=+0.075926106 container health_status 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 07 10:20:12 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:12 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:20:12 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:12.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:20:12 compute-1 ceph-mon[80077]: pgmap v1142: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:20:12 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:20:13 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:20:13 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:13 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:20:13 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:13.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:20:14 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:14 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:20:14 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:14.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:20:14 compute-1 ceph-mon[80077]: pgmap v1143: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:20:15 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:15 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:20:15 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:15.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:20:16 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:16 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:20:16 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:16.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:20:16 compute-1 ceph-mon[80077]: pgmap v1144: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:20:17 compute-1 sudo[240355]: pam_unix(sudo:session): session closed for user root
Dec 07 10:20:17 compute-1 sshd-session[240354]: Received disconnect from 192.168.122.10 port 42720:11: disconnected by user
Dec 07 10:20:17 compute-1 sshd-session[240354]: Disconnected from user zuul 192.168.122.10 port 42720
Dec 07 10:20:17 compute-1 sshd-session[240351]: pam_unix(sshd:session): session closed for user zuul
Dec 07 10:20:17 compute-1 systemd[1]: session-55.scope: Deactivated successfully.
Dec 07 10:20:17 compute-1 systemd[1]: session-55.scope: Consumed 2min 58.739s CPU time, 781.1M memory peak, read 338.3M from disk, written 181.9M to disk.
Dec 07 10:20:17 compute-1 systemd-logind[796]: Session 55 logged out. Waiting for processes to exit.
Dec 07 10:20:17 compute-1 systemd-logind[796]: Removed session 55.
Dec 07 10:20:17 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:17 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:20:17 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:17.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:20:17 compute-1 sshd-session[247850]: Accepted publickey for zuul from 192.168.122.10 port 35324 ssh2: ECDSA SHA256:Ge4vEI3tJyOAEV3kv3WUGTzBV/uoL+dgWZHkPhbKL64
Dec 07 10:20:17 compute-1 systemd-logind[796]: New session 56 of user zuul.
Dec 07 10:20:17 compute-1 systemd[1]: Started Session 56 of User zuul.
Dec 07 10:20:17 compute-1 sshd-session[247850]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 07 10:20:17 compute-1 sudo[247854]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cat /var/tmp/sos-osp/sosreport-compute-1-2025-12-07-wggvhwu.tar.xz
Dec 07 10:20:17 compute-1 sudo[247854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:20:17 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:20:17 compute-1 sudo[247854]: pam_unix(sudo:session): session closed for user root
Dec 07 10:20:17 compute-1 sshd-session[247853]: Received disconnect from 192.168.122.10 port 35324:11: disconnected by user
Dec 07 10:20:17 compute-1 sshd-session[247853]: Disconnected from user zuul 192.168.122.10 port 35324
Dec 07 10:20:17 compute-1 sshd-session[247850]: pam_unix(sshd:session): session closed for user zuul
Dec 07 10:20:17 compute-1 systemd[1]: session-56.scope: Deactivated successfully.
Dec 07 10:20:17 compute-1 systemd-logind[796]: Session 56 logged out. Waiting for processes to exit.
Dec 07 10:20:17 compute-1 systemd-logind[796]: Removed session 56.
Dec 07 10:20:18 compute-1 sshd-session[247879]: Accepted publickey for zuul from 192.168.122.10 port 35340 ssh2: ECDSA SHA256:Ge4vEI3tJyOAEV3kv3WUGTzBV/uoL+dgWZHkPhbKL64
Dec 07 10:20:18 compute-1 systemd-logind[796]: New session 57 of user zuul.
Dec 07 10:20:18 compute-1 systemd[1]: Started Session 57 of User zuul.
Dec 07 10:20:18 compute-1 sshd-session[247879]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 07 10:20:18 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:18 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:20:18 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:18.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:20:18 compute-1 sudo[247883]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rm -rf /var/tmp/sos-osp
Dec 07 10:20:18 compute-1 sudo[247883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:20:18 compute-1 sudo[247883]: pam_unix(sudo:session): session closed for user root
Dec 07 10:20:18 compute-1 sshd-session[247882]: Received disconnect from 192.168.122.10 port 35340:11: disconnected by user
Dec 07 10:20:18 compute-1 sshd-session[247882]: Disconnected from user zuul 192.168.122.10 port 35340
Dec 07 10:20:18 compute-1 sshd-session[247879]: pam_unix(sshd:session): session closed for user zuul
Dec 07 10:20:18 compute-1 systemd[1]: session-57.scope: Deactivated successfully.
Dec 07 10:20:18 compute-1 systemd-logind[796]: Session 57 logged out. Waiting for processes to exit.
Dec 07 10:20:18 compute-1 systemd-logind[796]: Removed session 57.
Dec 07 10:20:18 compute-1 ceph-mon[80077]: pgmap v1145: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:20:19 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:19 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:20:19 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:19.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:20:19 compute-1 podman[247909]: 2025-12-07 10:20:19.600679631 +0000 UTC m=+0.086113263 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 07 10:20:19 compute-1 ceph-mon[80077]: pgmap v1146: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:20:20 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:20 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:20:20 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:20.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:20:21 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:21 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:20:21 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:21.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:20:22 compute-1 ceph-mon[80077]: pgmap v1147: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:20:22 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:22 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:20:22 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:22.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:20:22 compute-1 nova_compute[230488]: 2025-12-07 10:20:22.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:20:22 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:20:23 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:23 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:20:23 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:23.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:20:24 compute-1 ceph-mon[80077]: pgmap v1148: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:20:24 compute-1 nova_compute[230488]: 2025-12-07 10:20:24.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:20:24 compute-1 nova_compute[230488]: 2025-12-07 10:20:24.270 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 07 10:20:24 compute-1 nova_compute[230488]: 2025-12-07 10:20:24.270 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:20:24 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:24 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:20:24 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:24.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:20:24 compute-1 nova_compute[230488]: 2025-12-07 10:20:24.299 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:20:24 compute-1 nova_compute[230488]: 2025-12-07 10:20:24.299 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:20:24 compute-1 nova_compute[230488]: 2025-12-07 10:20:24.299 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:20:24 compute-1 nova_compute[230488]: 2025-12-07 10:20:24.300 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 07 10:20:24 compute-1 nova_compute[230488]: 2025-12-07 10:20:24.300 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 07 10:20:24 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 07 10:20:24 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/432277966' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:20:24 compute-1 nova_compute[230488]: 2025-12-07 10:20:24.777 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 07 10:20:25 compute-1 nova_compute[230488]: 2025-12-07 10:20:25.034 230492 WARNING nova.virt.libvirt.driver [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 07 10:20:25 compute-1 nova_compute[230488]: 2025-12-07 10:20:25.036 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5173MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 07 10:20:25 compute-1 nova_compute[230488]: 2025-12-07 10:20:25.036 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:20:25 compute-1 nova_compute[230488]: 2025-12-07 10:20:25.036 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:20:25 compute-1 nova_compute[230488]: 2025-12-07 10:20:25.142 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 07 10:20:25 compute-1 nova_compute[230488]: 2025-12-07 10:20:25.143 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 07 10:20:25 compute-1 nova_compute[230488]: 2025-12-07 10:20:25.188 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 07 10:20:25 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/432277966' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:20:25 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:25 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:20:25 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:25.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:20:25 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 07 10:20:25 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1053462805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:20:25 compute-1 nova_compute[230488]: 2025-12-07 10:20:25.695 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 07 10:20:25 compute-1 nova_compute[230488]: 2025-12-07 10:20:25.704 230492 DEBUG nova.compute.provider_tree [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Inventory has not changed in ProviderTree for provider: 58b51610-0751-43d9-94a3-66540bffec81 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 07 10:20:25 compute-1 nova_compute[230488]: 2025-12-07 10:20:25.725 230492 DEBUG nova.scheduler.client.report [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Inventory has not changed for provider 58b51610-0751-43d9-94a3-66540bffec81 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 07 10:20:25 compute-1 nova_compute[230488]: 2025-12-07 10:20:25.728 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 07 10:20:25 compute-1 nova_compute[230488]: 2025-12-07 10:20:25.729 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:20:26 compute-1 ceph-mon[80077]: pgmap v1149: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:20:26 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/1053462805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:20:26 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:26 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:20:26 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:26.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:20:26 compute-1 sudo[247976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:20:26 compute-1 sudo[247976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:20:26 compute-1 sudo[247976]: pam_unix(sudo:session): session closed for user root
Dec 07 10:20:27 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:27 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:20:27 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:27.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:20:27 compute-1 nova_compute[230488]: 2025-12-07 10:20:27.729 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:20:27 compute-1 nova_compute[230488]: 2025-12-07 10:20:27.730 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:20:27 compute-1 nova_compute[230488]: 2025-12-07 10:20:27.730 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:20:27 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:20:28 compute-1 ceph-mon[80077]: pgmap v1150: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:20:28 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:20:28 compute-1 nova_compute[230488]: 2025-12-07 10:20:28.270 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:20:28 compute-1 nova_compute[230488]: 2025-12-07 10:20:28.271 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 07 10:20:28 compute-1 nova_compute[230488]: 2025-12-07 10:20:28.271 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 07 10:20:28 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:28 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:20:28 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:28.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:20:28 compute-1 nova_compute[230488]: 2025-12-07 10:20:28.289 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 07 10:20:29 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/3687243534' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:20:29 compute-1 nova_compute[230488]: 2025-12-07 10:20:29.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:20:29 compute-1 sshd-session[248002]: Invalid user postgres from 104.248.193.130 port 46678
Dec 07 10:20:29 compute-1 sshd-session[248002]: Connection closed by invalid user postgres 104.248.193.130 port 46678 [preauth]
Dec 07 10:20:29 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:29 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:20:29 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:29.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:20:30 compute-1 ceph-mon[80077]: pgmap v1151: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:20:30 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/3067814856' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:20:30 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2111108877' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:20:30 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:30 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:20:30 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:30.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:20:31 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/1940195417' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:20:31 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:31 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:20:31 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:31.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:20:32 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:32 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:20:32 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:32.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:20:32 compute-1 ceph-mon[80077]: pgmap v1152: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:20:32 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:20:33 compute-1 nova_compute[230488]: 2025-12-07 10:20:33.265 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:20:33 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:33 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:20:33 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:33.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:20:34 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:34 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:20:34 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:34.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:20:34 compute-1 ceph-mon[80077]: pgmap v1153: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:20:35 compute-1 nova_compute[230488]: 2025-12-07 10:20:35.287 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:20:35 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:35 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:20:35 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:35.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:20:35 compute-1 podman[248007]: 2025-12-07 10:20:35.633772381 +0000 UTC m=+0.126554653 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 07 10:20:36 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:36 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:20:36 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:36.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:20:36 compute-1 ceph-mon[80077]: pgmap v1154: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:20:37 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:37 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:20:37 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:37.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:20:37 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:20:38 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:38 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:20:38 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:38.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:20:38 compute-1 ceph-mon[80077]: pgmap v1155: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1022 B/s rd, 0 op/s
Dec 07 10:20:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:20:38.656 142603 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:20:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:20:38.658 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:20:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:20:38.658 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:20:39 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:39 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:20:39 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:39.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:20:40 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:40 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:20:40 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:40.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:20:40 compute-1 ceph-mon[80077]: pgmap v1156: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:20:41 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:41 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:20:41 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:41.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:20:42 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:42 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:20:42 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:42.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:20:42 compute-1 ceph-mon[80077]: pgmap v1157: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1022 B/s rd, 0 op/s
Dec 07 10:20:42 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:20:42 compute-1 podman[248038]: 2025-12-07 10:20:42.548115809 +0000 UTC m=+0.055731736 container health_status 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 07 10:20:42 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:20:43 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:43 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:20:43 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:43.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:20:44 compute-1 ceph-osd[77581]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 07 10:20:44 compute-1 ceph-osd[77581]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 13K writes, 50K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 13K writes, 3974 syncs, 3.44 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2601 writes, 8027 keys, 2601 commit groups, 1.0 writes per commit group, ingest: 7.55 MB, 0.01 MB/s
                                           Interval WAL: 2601 writes, 1129 syncs, 2.30 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 07 10:20:44 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:44 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:20:44 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:44.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:20:44 compute-1 ceph-mon[80077]: pgmap v1158: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:20:45 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:45 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:20:45 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:45.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:20:46 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:46 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:20:46 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:46.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:20:46 compute-1 ceph-mon[80077]: pgmap v1159: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:20:46 compute-1 sudo[248061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:20:46 compute-1 sudo[248061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:20:46 compute-1 sudo[248061]: pam_unix(sudo:session): session closed for user root
Dec 07 10:20:47 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:47 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:20:47 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:47.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:20:47 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:20:48 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:48 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:20:48 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:48.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:20:48 compute-1 ceph-mon[80077]: pgmap v1160: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:20:49 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:49 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:20:49 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:49.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:20:49 compute-1 ceph-mon[80077]: pgmap v1161: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:20:50 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:50 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:20:50 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:50.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:20:50 compute-1 podman[248088]: 2025-12-07 10:20:50.589745694 +0000 UTC m=+0.085888338 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Dec 07 10:20:51 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:51 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:20:51 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:51.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:20:52 compute-1 ceph-mon[80077]: pgmap v1162: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:20:52 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:52 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:20:52 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:52.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:20:52 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:20:53 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:53 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:20:53 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:53.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:20:54 compute-1 ceph-mon[80077]: pgmap v1163: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:20:54 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:54 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:20:54 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:54.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:20:55 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:55 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:20:55 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:55.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:20:56 compute-1 ceph-mon[80077]: pgmap v1164: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:20:56 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:56 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:20:56 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:56.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:20:57 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:57 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:20:57 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:57.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:20:57 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:20:58 compute-1 ceph-mon[80077]: pgmap v1165: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:20:58 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:20:58 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:58 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:20:58 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:20:58.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:20:59 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:20:59 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:20:59 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:20:59.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:21:00 compute-1 ceph-mon[80077]: pgmap v1166: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:21:00 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:00 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:21:00 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:21:00.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:21:01 compute-1 sudo[248112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 10:21:01 compute-1 sudo[248112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:21:01 compute-1 sudo[248112]: pam_unix(sudo:session): session closed for user root
Dec 07 10:21:01 compute-1 sudo[248137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Dec 07 10:21:01 compute-1 sudo[248137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:21:01 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:01 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:21:01 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:21:01.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:21:01 compute-1 sudo[248137]: pam_unix(sudo:session): session closed for user root
Dec 07 10:21:01 compute-1 sudo[248182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 10:21:01 compute-1 sudo[248182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:21:01 compute-1 sudo[248182]: pam_unix(sudo:session): session closed for user root
Dec 07 10:21:01 compute-1 sudo[248207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 07 10:21:01 compute-1 sudo[248207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:21:02 compute-1 ceph-mon[80077]: pgmap v1167: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:21:02 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:21:02 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:21:02 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:21:02 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:21:02 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:02 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:21:02 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:21:02.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:21:02 compute-1 sudo[248207]: pam_unix(sudo:session): session closed for user root
Dec 07 10:21:02 compute-1 ceph-mon[80077]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #64. Immutable memtables: 0.
Dec 07 10:21:02 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:21:02.900249) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 07 10:21:02 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 64
Dec 07 10:21:02 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102862900313, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 1353, "num_deletes": 253, "total_data_size": 3108493, "memory_usage": 3139864, "flush_reason": "Manual Compaction"}
Dec 07 10:21:02 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #65: started
Dec 07 10:21:02 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102862917049, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 65, "file_size": 2025812, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34300, "largest_seqno": 35648, "table_properties": {"data_size": 2019865, "index_size": 3214, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 12414, "raw_average_key_size": 18, "raw_value_size": 2007727, "raw_average_value_size": 3060, "num_data_blocks": 140, "num_entries": 656, "num_filter_entries": 656, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765102760, "oldest_key_time": 1765102760, "file_creation_time": 1765102862, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "19b7cd17-892b-4642-8771-311739802c4a", "db_session_id": "JE48258Z8V09XG5T5TD7", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Dec 07 10:21:02 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 16865 microseconds, and 9516 cpu microseconds.
Dec 07 10:21:02 compute-1 ceph-mon[80077]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 07 10:21:02 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:21:02.917117) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #65: 2025812 bytes OK
Dec 07 10:21:02 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:21:02.917150) [db/memtable_list.cc:519] [default] Level-0 commit table #65 started
Dec 07 10:21:02 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:21:02.918727) [db/memtable_list.cc:722] [default] Level-0 commit table #65: memtable #1 done
Dec 07 10:21:02 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:21:02.918747) EVENT_LOG_v1 {"time_micros": 1765102862918740, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 07 10:21:02 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:21:02.918772) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 07 10:21:02 compute-1 ceph-mon[80077]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 3102045, prev total WAL file size 3102045, number of live WAL files 2.
Dec 07 10:21:02 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000061.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 10:21:02 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:21:02.919712) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323530' seq:72057594037927935, type:22 .. '6B7600353034' seq:0, type:0; will stop at (end)
Dec 07 10:21:02 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 07 10:21:02 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [65(1978KB)], [63(14MB)]
Dec 07 10:21:02 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102862919747, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [65], "files_L6": [63], "score": -1, "input_data_size": 16793697, "oldest_snapshot_seqno": -1}
Dec 07 10:21:02 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:21:03 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #66: 6817 keys, 15544836 bytes, temperature: kUnknown
Dec 07 10:21:03 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102863005027, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 66, "file_size": 15544836, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15498885, "index_size": 27791, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17093, "raw_key_size": 176348, "raw_average_key_size": 25, "raw_value_size": 15375857, "raw_average_value_size": 2255, "num_data_blocks": 1106, "num_entries": 6817, "num_filter_entries": 6817, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100473, "oldest_key_time": 0, "file_creation_time": 1765102862, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "19b7cd17-892b-4642-8771-311739802c4a", "db_session_id": "JE48258Z8V09XG5T5TD7", "orig_file_number": 66, "seqno_to_time_mapping": "N/A"}}
Dec 07 10:21:03 compute-1 ceph-mon[80077]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 07 10:21:03 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:21:03.005419) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 15544836 bytes
Dec 07 10:21:03 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:21:03.007177) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 196.7 rd, 182.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 14.1 +0.0 blob) out(14.8 +0.0 blob), read-write-amplify(16.0) write-amplify(7.7) OK, records in: 7339, records dropped: 522 output_compression: NoCompression
Dec 07 10:21:03 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:21:03.007212) EVENT_LOG_v1 {"time_micros": 1765102863007195, "job": 38, "event": "compaction_finished", "compaction_time_micros": 85376, "compaction_time_cpu_micros": 44148, "output_level": 6, "num_output_files": 1, "total_output_size": 15544836, "num_input_records": 7339, "num_output_records": 6817, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 07 10:21:03 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 10:21:03 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102863008044, "job": 38, "event": "table_file_deletion", "file_number": 65}
Dec 07 10:21:03 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000063.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 10:21:03 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102863013198, "job": 38, "event": "table_file_deletion", "file_number": 63}
Dec 07 10:21:03 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:21:02.919663) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:21:03 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:21:03.013249) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:21:03 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:21:03.013257) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:21:03 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:21:03.013260) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:21:03 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:21:03.013263) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:21:03 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:21:03.013266) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:21:03 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:21:03 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 07 10:21:03 compute-1 ceph-mon[80077]: pgmap v1168: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Dec 07 10:21:03 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:21:03 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:21:03 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 07 10:21:03 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 07 10:21:03 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:21:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/3477822844' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 07 10:21:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/3477822844' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 07 10:21:03 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:03 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:21:03 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:21:03.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:21:04 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:04 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:21:04 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:21:04.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:21:04 compute-1 ceph-mon[80077]: pgmap v1169: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Dec 07 10:21:05 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:05 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:21:05 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:21:05.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:21:06 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:06 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:21:06 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:21:06.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:21:06 compute-1 ceph-mon[80077]: pgmap v1170: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Dec 07 10:21:06 compute-1 podman[248268]: 2025-12-07 10:21:06.675677882 +0000 UTC m=+0.162563943 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Dec 07 10:21:06 compute-1 sudo[248297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:21:06 compute-1 sudo[248297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:21:06 compute-1 sudo[248297]: pam_unix(sudo:session): session closed for user root
Dec 07 10:21:07 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:07 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:21:07 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:21:07.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:21:07 compute-1 sudo[248322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 07 10:21:07 compute-1 sudo[248322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:21:07 compute-1 sudo[248322]: pam_unix(sudo:session): session closed for user root
Dec 07 10:21:07 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:21:08 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:08 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:21:08 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:21:08.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:21:08 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:21:08 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:21:08 compute-1 ceph-mon[80077]: pgmap v1171: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Dec 07 10:21:09 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:09 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:21:09 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:21:09.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:21:10 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:10 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:21:10 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:21:10.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:21:10 compute-1 ceph-mon[80077]: pgmap v1172: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Dec 07 10:21:11 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:11 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:21:11 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:21:11.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:21:12 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:12 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:21:12 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:21:12.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:21:12 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:21:12 compute-1 sshd-session[248350]: Invalid user postgres from 104.248.193.130 port 45450
Dec 07 10:21:12 compute-1 sshd-session[248350]: Connection closed by invalid user postgres 104.248.193.130 port 45450 [preauth]
Dec 07 10:21:12 compute-1 podman[248352]: 2025-12-07 10:21:12.89118238 +0000 UTC m=+0.086514644 container health_status 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 07 10:21:12 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:21:13 compute-1 ceph-mon[80077]: pgmap v1173: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Dec 07 10:21:13 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:13 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:21:13 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:21:13.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:21:13 compute-1 ceph-mon[80077]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 07 10:21:13 compute-1 ceph-mon[80077]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 6882 writes, 36K keys, 6882 commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.04 MB/s
                                           Cumulative WAL: 6882 writes, 6882 syncs, 1.00 writes per sync, written: 0.09 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1614 writes, 8440 keys, 1614 commit groups, 1.0 writes per commit group, ingest: 17.83 MB, 0.03 MB/s
                                           Interval WAL: 1614 writes, 1614 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     88.8      0.58              0.18        19    0.031       0      0       0.0       0.0
                                             L6      1/0   14.82 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.6    106.8     92.5      2.57              0.72        18    0.143    102K   9809       0.0       0.0
                                            Sum      1/0   14.82 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.6     87.1     91.8      3.15              0.90        37    0.085    102K   9809       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.3    128.2    132.0      0.65              0.25        10    0.065     34K   3102       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0    106.8     92.5      2.57              0.72        18    0.143    102K   9809       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     89.0      0.58              0.18        18    0.032       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.050, interval 0.013
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.28 GB write, 0.12 MB/s write, 0.27 GB read, 0.11 MB/s read, 3.2 seconds
                                           Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.08 GB read, 0.14 MB/s read, 0.6 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5563169dd350#2 capacity: 304.00 MB usage: 24.07 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000225 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1481,23.27 MB,7.65563%) FilterBlock(37,301.11 KB,0.0967277%) IndexBlock(37,515.45 KB,0.165583%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 07 10:21:14 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:14 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:21:14 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:21:14.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:21:14 compute-1 ceph-mon[80077]: pgmap v1174: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:21:15 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:15 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:21:15 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:21:15.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:21:16 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:16 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:21:16 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:21:16.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:21:16 compute-1 ceph-mon[80077]: pgmap v1175: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:21:17 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:17 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:21:17 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:21:17.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:21:17 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:21:18 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:18 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:21:18 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:21:18.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:21:18 compute-1 ceph-mon[80077]: pgmap v1176: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:21:19 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:19 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:21:19 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:21:19.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:21:20 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:20 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:21:20 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:21:20.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:21:20 compute-1 ceph-mon[80077]: pgmap v1177: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:21:21 compute-1 podman[248378]: 2025-12-07 10:21:21.600302012 +0000 UTC m=+0.090230766 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 07 10:21:21 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:21 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:21:21 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:21:21.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:21:22 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:22 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:21:22 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:21:22.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:21:22 compute-1 ceph-mon[80077]: pgmap v1178: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:21:22 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:21:23 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:23 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:21:23 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:21:23.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:21:24 compute-1 nova_compute[230488]: 2025-12-07 10:21:24.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:21:24 compute-1 nova_compute[230488]: 2025-12-07 10:21:24.270 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:21:24 compute-1 nova_compute[230488]: 2025-12-07 10:21:24.298 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:21:24 compute-1 nova_compute[230488]: 2025-12-07 10:21:24.299 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:21:24 compute-1 nova_compute[230488]: 2025-12-07 10:21:24.299 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:21:24 compute-1 nova_compute[230488]: 2025-12-07 10:21:24.299 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 07 10:21:24 compute-1 nova_compute[230488]: 2025-12-07 10:21:24.300 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 07 10:21:24 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:24 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:21:24 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:21:24.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:21:24 compute-1 ceph-mon[80077]: pgmap v1179: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:21:24 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 07 10:21:24 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/337463090' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:21:24 compute-1 nova_compute[230488]: 2025-12-07 10:21:24.784 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 07 10:21:24 compute-1 nova_compute[230488]: 2025-12-07 10:21:24.972 230492 WARNING nova.virt.libvirt.driver [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 07 10:21:24 compute-1 nova_compute[230488]: 2025-12-07 10:21:24.973 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5182MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 07 10:21:24 compute-1 nova_compute[230488]: 2025-12-07 10:21:24.974 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:21:24 compute-1 nova_compute[230488]: 2025-12-07 10:21:24.975 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:21:25 compute-1 nova_compute[230488]: 2025-12-07 10:21:25.058 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 07 10:21:25 compute-1 nova_compute[230488]: 2025-12-07 10:21:25.059 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 07 10:21:25 compute-1 nova_compute[230488]: 2025-12-07 10:21:25.084 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 07 10:21:25 compute-1 nova_compute[230488]: 2025-12-07 10:21:25.609 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 07 10:21:25 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:25 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:21:25 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:21:25.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:21:25 compute-1 nova_compute[230488]: 2025-12-07 10:21:25.616 230492 DEBUG nova.compute.provider_tree [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Inventory has not changed in ProviderTree for provider: 58b51610-0751-43d9-94a3-66540bffec81 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 07 10:21:25 compute-1 nova_compute[230488]: 2025-12-07 10:21:25.638 230492 DEBUG nova.scheduler.client.report [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Inventory has not changed for provider 58b51610-0751-43d9-94a3-66540bffec81 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 07 10:21:25 compute-1 nova_compute[230488]: 2025-12-07 10:21:25.640 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 07 10:21:25 compute-1 nova_compute[230488]: 2025-12-07 10:21:25.641 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:21:25 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/337463090' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:21:25 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/3618585595' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:21:26 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:26 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:21:26 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:21:26.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:21:26 compute-1 ceph-mon[80077]: pgmap v1180: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:21:27 compute-1 sudo[248444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:21:27 compute-1 sudo[248444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:21:27 compute-1 sudo[248444]: pam_unix(sudo:session): session closed for user root
Dec 07 10:21:27 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:27 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:21:27 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:21:27.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:21:27 compute-1 nova_compute[230488]: 2025-12-07 10:21:27.641 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:21:27 compute-1 nova_compute[230488]: 2025-12-07 10:21:27.641 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:21:27 compute-1 nova_compute[230488]: 2025-12-07 10:21:27.642 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:21:27 compute-1 nova_compute[230488]: 2025-12-07 10:21:27.642 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 07 10:21:27 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:21:27 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:21:28 compute-1 nova_compute[230488]: 2025-12-07 10:21:28.270 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:21:28 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:28 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:21:28 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:21:28.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:21:28 compute-1 ceph-mon[80077]: pgmap v1181: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:21:29 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:29 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:21:29 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:21:29.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:21:29 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/3811370647' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:21:30 compute-1 nova_compute[230488]: 2025-12-07 10:21:30.270 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:21:30 compute-1 nova_compute[230488]: 2025-12-07 10:21:30.271 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 07 10:21:30 compute-1 nova_compute[230488]: 2025-12-07 10:21:30.271 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 07 10:21:30 compute-1 nova_compute[230488]: 2025-12-07 10:21:30.293 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 07 10:21:30 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:30 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:21:30 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:21:30.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:21:30 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/485893741' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:21:30 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1419933007' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:21:30 compute-1 ceph-mon[80077]: pgmap v1182: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:21:31 compute-1 nova_compute[230488]: 2025-12-07 10:21:31.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:21:31 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:31 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:21:31 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:21:31.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:21:31 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2891392698' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:21:32 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:32 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:21:32 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:21:32.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:21:32 compute-1 ceph-mon[80077]: pgmap v1183: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:21:32 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:21:33 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:33 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:21:33 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:21:33.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:21:34 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:34 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:21:34 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:21:34.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:21:34 compute-1 ceph-mon[80077]: pgmap v1184: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:21:35 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:35 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:21:35 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:21:35.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:21:36 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:36 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:21:36 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:21:36.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:21:36 compute-1 ceph-mon[80077]: pgmap v1185: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:21:37 compute-1 nova_compute[230488]: 2025-12-07 10:21:37.265 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:21:37 compute-1 podman[248474]: 2025-12-07 10:21:37.623245417 +0000 UTC m=+0.121948118 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 07 10:21:37 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:37 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:21:37 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:21:37.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:21:37 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:21:38 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:38 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:21:38 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:21:38.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:21:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:21:38.657 142603 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:21:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:21:38.658 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:21:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:21:38.658 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:21:38 compute-1 ceph-mon[80077]: pgmap v1186: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:21:39 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:39 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:21:39 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:21:39.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:21:40 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:40 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:21:40 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:21:40.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:21:40 compute-1 ceph-mon[80077]: pgmap v1187: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:21:41 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:41 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:21:41 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:21:41.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:21:42 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:42 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:21:42 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:21:42.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:21:42 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:21:42 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:21:43 compute-1 ceph-mon[80077]: pgmap v1188: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:21:43 compute-1 podman[248503]: 2025-12-07 10:21:43.587174212 +0000 UTC m=+0.086794392 container health_status 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 07 10:21:43 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:43 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:21:43 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:21:43.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:21:44 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:44 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:21:44 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:21:44.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:21:44 compute-1 ceph-mon[80077]: pgmap v1189: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:21:45 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:45 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:21:45 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:21:45.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:21:46 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:46 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:21:46 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:21:46.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:21:46 compute-1 ceph-mon[80077]: pgmap v1190: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:21:47 compute-1 sudo[248526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:21:47 compute-1 sudo[248526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:21:47 compute-1 sudo[248526]: pam_unix(sudo:session): session closed for user root
Dec 07 10:21:47 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:47 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:21:47 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:21:47.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:21:47 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:21:48 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:48 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:21:48 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:21:48.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:21:48 compute-1 ceph-mon[80077]: pgmap v1191: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:21:49 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:49 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:21:49 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:21:49.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:21:50 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:50 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:21:50 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:21:50.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:21:50 compute-1 ceph-mon[80077]: pgmap v1192: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:21:51 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:51 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:21:51 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:21:51.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:21:52 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:52 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:21:52 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:21:52.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:21:52 compute-1 podman[248554]: 2025-12-07 10:21:52.590573319 +0000 UTC m=+0.074601941 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 07 10:21:52 compute-1 ceph-mon[80077]: pgmap v1193: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:21:52 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:21:53 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:53 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:21:53 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:21:53.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:21:54 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:54 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:21:54 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:21:54.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:21:54 compute-1 ceph-mon[80077]: pgmap v1194: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:21:55 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:55 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:21:55 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:21:55.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:21:56 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:56 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:21:56 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:21:56.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:21:56 compute-1 ceph-mon[80077]: pgmap v1195: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:21:57 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:57 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:21:57 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:21:57.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:21:57 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:21:57 compute-1 sshd-session[248576]: Invalid user postgres from 104.248.193.130 port 49118
Dec 07 10:21:57 compute-1 sshd-session[248576]: Connection closed by invalid user postgres 104.248.193.130 port 49118 [preauth]
Dec 07 10:21:57 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:21:58 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:58 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:21:58 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:21:58.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:21:58 compute-1 ceph-mon[80077]: pgmap v1196: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:21:59 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:21:59 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:21:59 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:21:59.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:22:00 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:00 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:22:00 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:22:00.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:22:00 compute-1 ceph-mon[80077]: pgmap v1197: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:22:01 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:01 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:22:01 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:22:01.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:22:02 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:02 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:22:02 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:22:02.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:22:02 compute-1 ceph-mon[80077]: pgmap v1198: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:22:02 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:22:03 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:03 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:22:03 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:22:03.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:22:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/3096460374' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 07 10:22:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/3096460374' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 07 10:22:04 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:04 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:22:04 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:22:04.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:22:04 compute-1 ceph-mon[80077]: pgmap v1199: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:22:05 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:05 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:22:05 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:22:05.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:22:06 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:06 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:22:06 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:22:06.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:22:06 compute-1 ceph-mon[80077]: pgmap v1200: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:22:07 compute-1 sudo[248583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:22:07 compute-1 sudo[248583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:22:07 compute-1 sudo[248583]: pam_unix(sudo:session): session closed for user root
Dec 07 10:22:07 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:07 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:22:07 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:22:07.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:22:07 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:22:08 compute-1 sudo[248608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 10:22:08 compute-1 sudo[248608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:22:08 compute-1 sudo[248608]: pam_unix(sudo:session): session closed for user root
Dec 07 10:22:08 compute-1 sudo[248640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 07 10:22:08 compute-1 sudo[248640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:22:08 compute-1 podman[248632]: 2025-12-07 10:22:08.354247802 +0000 UTC m=+0.151225295 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Dec 07 10:22:08 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:08 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:22:08 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:22:08.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:22:08 compute-1 sudo[248640]: pam_unix(sudo:session): session closed for user root
Dec 07 10:22:09 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:22:09 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:22:09 compute-1 ceph-mon[80077]: pgmap v1201: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:22:09 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:22:09 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 07 10:22:09 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:22:09 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:22:09 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 07 10:22:09 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 07 10:22:09 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:22:09 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:09 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:22:09 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:22:09.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:22:10 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:10 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:22:10 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:22:10.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:22:10 compute-1 ceph-mon[80077]: pgmap v1202: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 07 10:22:11 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:11 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:22:11 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:22:11.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:22:12 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:12 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:22:12 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:22:12.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:22:12 compute-1 ceph-mon[80077]: pgmap v1203: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 07 10:22:12 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:22:12 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:22:13 compute-1 ceph-mon[80077]: pgmap v1204: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 07 10:22:13 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:13 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:22:13 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:22:13.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:22:14 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:14 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:22:14 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:22:14.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:22:14 compute-1 sudo[248723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 07 10:22:14 compute-1 sudo[248723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:22:14 compute-1 sudo[248723]: pam_unix(sudo:session): session closed for user root
Dec 07 10:22:14 compute-1 podman[248747]: 2025-12-07 10:22:14.567255221 +0000 UTC m=+0.069753148 container health_status 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 07 10:22:15 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:22:15 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:22:15 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:15 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:22:15 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:22:15.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:22:16 compute-1 ceph-mon[80077]: pgmap v1205: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 07 10:22:16 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:16 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:22:16 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:22:16.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:22:17 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:17 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:22:17 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:22:17.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:22:17 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:22:18 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:18 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:22:18 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:22:18.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:22:18 compute-1 ceph-mon[80077]: pgmap v1206: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 07 10:22:19 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:19 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:22:19 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:22:19.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:22:20 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:20 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:22:20 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:22:20.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:22:20 compute-1 ceph-mon[80077]: pgmap v1207: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 07 10:22:21 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:21 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:22:21 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:22:21.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:22:22 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:22 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:22:22 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:22:22.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:22:22 compute-1 ceph-mon[80077]: pgmap v1208: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:22:22 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:22:23 compute-1 podman[248773]: 2025-12-07 10:22:23.596303284 +0000 UTC m=+0.085562389 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 07 10:22:23 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:23 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:22:23 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:22:23.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:22:24 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:24 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:22:24 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:22:24.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:22:24 compute-1 ceph-mon[80077]: pgmap v1209: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:22:25 compute-1 nova_compute[230488]: 2025-12-07 10:22:25.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:22:25 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:25 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:22:25 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:22:25.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:22:26 compute-1 nova_compute[230488]: 2025-12-07 10:22:26.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:22:26 compute-1 nova_compute[230488]: 2025-12-07 10:22:26.270 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:22:26 compute-1 nova_compute[230488]: 2025-12-07 10:22:26.293 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:22:26 compute-1 nova_compute[230488]: 2025-12-07 10:22:26.293 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:22:26 compute-1 nova_compute[230488]: 2025-12-07 10:22:26.293 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:22:26 compute-1 nova_compute[230488]: 2025-12-07 10:22:26.293 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 07 10:22:26 compute-1 nova_compute[230488]: 2025-12-07 10:22:26.294 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 07 10:22:26 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:26 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:22:26 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:22:26.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:22:26 compute-1 ceph-mon[80077]: pgmap v1210: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:22:26 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 07 10:22:26 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1145090920' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:22:26 compute-1 nova_compute[230488]: 2025-12-07 10:22:26.803 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 07 10:22:27 compute-1 nova_compute[230488]: 2025-12-07 10:22:27.012 230492 WARNING nova.virt.libvirt.driver [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 07 10:22:27 compute-1 nova_compute[230488]: 2025-12-07 10:22:27.013 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5198MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 07 10:22:27 compute-1 nova_compute[230488]: 2025-12-07 10:22:27.014 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:22:27 compute-1 nova_compute[230488]: 2025-12-07 10:22:27.014 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:22:27 compute-1 nova_compute[230488]: 2025-12-07 10:22:27.128 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 07 10:22:27 compute-1 nova_compute[230488]: 2025-12-07 10:22:27.129 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 07 10:22:27 compute-1 nova_compute[230488]: 2025-12-07 10:22:27.264 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 07 10:22:27 compute-1 sudo[248819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:22:27 compute-1 sudo[248819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:22:27 compute-1 sudo[248819]: pam_unix(sudo:session): session closed for user root
Dec 07 10:22:27 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/1145090920' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:22:27 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:22:27 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:27 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:22:27 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:22:27.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:22:27 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 07 10:22:27 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3401383796' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:22:27 compute-1 nova_compute[230488]: 2025-12-07 10:22:27.754 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 07 10:22:27 compute-1 nova_compute[230488]: 2025-12-07 10:22:27.760 230492 DEBUG nova.compute.provider_tree [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Inventory has not changed in ProviderTree for provider: 58b51610-0751-43d9-94a3-66540bffec81 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 07 10:22:27 compute-1 nova_compute[230488]: 2025-12-07 10:22:27.789 230492 DEBUG nova.scheduler.client.report [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Inventory has not changed for provider 58b51610-0751-43d9-94a3-66540bffec81 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 07 10:22:27 compute-1 nova_compute[230488]: 2025-12-07 10:22:27.790 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 07 10:22:27 compute-1 nova_compute[230488]: 2025-12-07 10:22:27.790 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:22:27 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:22:28 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:28 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:22:28 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:22:28.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:22:28 compute-1 ceph-mon[80077]: pgmap v1211: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:22:28 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/3401383796' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:22:28 compute-1 nova_compute[230488]: 2025-12-07 10:22:28.790 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:22:28 compute-1 nova_compute[230488]: 2025-12-07 10:22:28.791 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:22:28 compute-1 nova_compute[230488]: 2025-12-07 10:22:28.791 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:22:28 compute-1 nova_compute[230488]: 2025-12-07 10:22:28.792 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 07 10:22:29 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:29 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:22:29 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:22:29.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:22:30 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:30 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:22:30 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:22:30.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:22:30 compute-1 ceph-mon[80077]: pgmap v1212: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:22:30 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/3512306957' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:22:30 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1774550792' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:22:31 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/1220567586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:22:31 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/685150083' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:22:31 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:31 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:22:31 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:22:31.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:22:32 compute-1 nova_compute[230488]: 2025-12-07 10:22:32.272 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:22:32 compute-1 nova_compute[230488]: 2025-12-07 10:22:32.273 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 07 10:22:32 compute-1 nova_compute[230488]: 2025-12-07 10:22:32.273 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 07 10:22:32 compute-1 nova_compute[230488]: 2025-12-07 10:22:32.306 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 07 10:22:32 compute-1 nova_compute[230488]: 2025-12-07 10:22:32.307 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:22:32 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:32 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:22:32 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:22:32.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:22:32 compute-1 ceph-mon[80077]: pgmap v1213: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:22:32 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:22:33 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:33 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:22:33 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:22:33.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:22:34 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:34 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:22:34 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:22:34.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:22:34 compute-1 ceph-mon[80077]: pgmap v1214: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:22:35 compute-1 ceph-mon[80077]: pgmap v1215: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:22:35 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:35 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:22:35 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:22:35.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:22:36 compute-1 nova_compute[230488]: 2025-12-07 10:22:36.270 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:22:36 compute-1 nova_compute[230488]: 2025-12-07 10:22:36.305 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:22:36 compute-1 nova_compute[230488]: 2025-12-07 10:22:36.306 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 07 10:22:36 compute-1 nova_compute[230488]: 2025-12-07 10:22:36.329 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 07 10:22:36 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:36 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:22:36 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:22:36.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:22:37 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:37 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:22:37 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:22:37.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:22:37 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:22:38 compute-1 ceph-mon[80077]: pgmap v1216: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:22:38 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:38 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:22:38 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:22:38.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:22:38 compute-1 podman[248871]: 2025-12-07 10:22:38.649038878 +0000 UTC m=+0.141088149 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 07 10:22:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:22:38.658 142603 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:22:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:22:38.659 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:22:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:22:38.659 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:22:39 compute-1 nova_compute[230488]: 2025-12-07 10:22:39.326 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:22:39 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:39 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:22:39 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:22:39.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:22:40 compute-1 ceph-mon[80077]: pgmap v1217: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:22:40 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:40 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:22:40 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:22:40.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:22:41 compute-1 nova_compute[230488]: 2025-12-07 10:22:41.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:22:41 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:41 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:22:41 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:22:41.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:22:42 compute-1 ceph-mon[80077]: pgmap v1218: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:22:42 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:22:42 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:42 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:22:42 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:22:42.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:22:42 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:22:43 compute-1 nova_compute[230488]: 2025-12-07 10:22:43.283 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:22:43 compute-1 nova_compute[230488]: 2025-12-07 10:22:43.284 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 07 10:22:43 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:43 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:22:43 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:22:43.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:22:43 compute-1 sshd-session[248899]: Invalid user postgres from 104.248.193.130 port 48132
Dec 07 10:22:43 compute-1 sshd-session[248899]: Connection closed by invalid user postgres 104.248.193.130 port 48132 [preauth]
Dec 07 10:22:44 compute-1 ceph-mon[80077]: pgmap v1219: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:22:44 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:44 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:22:44 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:22:44.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:22:45 compute-1 ceph-mon[80077]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #67. Immutable memtables: 0.
Dec 07 10:22:45 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:22:45.543691) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 07 10:22:45 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 67
Dec 07 10:22:45 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102965543743, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 1556, "num_deletes": 506, "total_data_size": 3125556, "memory_usage": 3173768, "flush_reason": "Manual Compaction"}
Dec 07 10:22:45 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #68: started
Dec 07 10:22:45 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102965562121, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 68, "file_size": 2035465, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35653, "largest_seqno": 37204, "table_properties": {"data_size": 2029215, "index_size": 3004, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 16402, "raw_average_key_size": 18, "raw_value_size": 2014557, "raw_average_value_size": 2323, "num_data_blocks": 131, "num_entries": 867, "num_filter_entries": 867, "num_deletions": 506, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765102863, "oldest_key_time": 1765102863, "file_creation_time": 1765102965, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "19b7cd17-892b-4642-8771-311739802c4a", "db_session_id": "JE48258Z8V09XG5T5TD7", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Dec 07 10:22:45 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 18499 microseconds, and 10928 cpu microseconds.
Dec 07 10:22:45 compute-1 ceph-mon[80077]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 07 10:22:45 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:22:45.562187) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #68: 2035465 bytes OK
Dec 07 10:22:45 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:22:45.562219) [db/memtable_list.cc:519] [default] Level-0 commit table #68 started
Dec 07 10:22:45 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:22:45.564054) [db/memtable_list.cc:722] [default] Level-0 commit table #68: memtable #1 done
Dec 07 10:22:45 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:22:45.564128) EVENT_LOG_v1 {"time_micros": 1765102965564114, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 07 10:22:45 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:22:45.564172) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 07 10:22:45 compute-1 ceph-mon[80077]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 3117398, prev total WAL file size 3117398, number of live WAL files 2.
Dec 07 10:22:45 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000064.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 10:22:45 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:22:45.565974) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Dec 07 10:22:45 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 07 10:22:45 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [68(1987KB)], [66(14MB)]
Dec 07 10:22:45 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102965566045, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [68], "files_L6": [66], "score": -1, "input_data_size": 17580301, "oldest_snapshot_seqno": -1}
Dec 07 10:22:45 compute-1 podman[248902]: 2025-12-07 10:22:45.606926791 +0000 UTC m=+0.096611980 container health_status 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 07 10:22:45 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #69: 6655 keys, 15337418 bytes, temperature: kUnknown
Dec 07 10:22:45 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102965655696, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 69, "file_size": 15337418, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15292179, "index_size": 27527, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16645, "raw_key_size": 174886, "raw_average_key_size": 26, "raw_value_size": 15171374, "raw_average_value_size": 2279, "num_data_blocks": 1087, "num_entries": 6655, "num_filter_entries": 6655, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100473, "oldest_key_time": 0, "file_creation_time": 1765102965, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "19b7cd17-892b-4642-8771-311739802c4a", "db_session_id": "JE48258Z8V09XG5T5TD7", "orig_file_number": 69, "seqno_to_time_mapping": "N/A"}}
Dec 07 10:22:45 compute-1 ceph-mon[80077]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 07 10:22:45 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:22:45.655967) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 15337418 bytes
Dec 07 10:22:45 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:22:45.657368) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 195.9 rd, 170.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 14.8 +0.0 blob) out(14.6 +0.0 blob), read-write-amplify(16.2) write-amplify(7.5) OK, records in: 7684, records dropped: 1029 output_compression: NoCompression
Dec 07 10:22:45 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:22:45.657384) EVENT_LOG_v1 {"time_micros": 1765102965657375, "job": 40, "event": "compaction_finished", "compaction_time_micros": 89734, "compaction_time_cpu_micros": 44839, "output_level": 6, "num_output_files": 1, "total_output_size": 15337418, "num_input_records": 7684, "num_output_records": 6655, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 07 10:22:45 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 10:22:45 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102965657898, "job": 40, "event": "table_file_deletion", "file_number": 68}
Dec 07 10:22:45 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000066.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 10:22:45 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765102965660945, "job": 40, "event": "table_file_deletion", "file_number": 66}
Dec 07 10:22:45 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:22:45.565310) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:22:45 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:22:45.661046) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:22:45 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:22:45.661053) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:22:45 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:22:45.661055) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:22:45 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:22:45.661057) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:22:45 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:22:45.661059) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:22:45 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:45 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:22:45 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:22:45.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:22:46 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:46 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:22:46 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:22:46.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:22:46 compute-1 ceph-mon[80077]: pgmap v1220: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:22:47 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:47 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:22:47 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:22:47.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:22:48 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:48 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:22:48 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:22:48.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:22:49 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:22:49 compute-1 ceph-mon[80077]: pgmap v1221: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:22:49 compute-1 sudo[248924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:22:49 compute-1 sudo[248924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:22:49 compute-1 sudo[248924]: pam_unix(sudo:session): session closed for user root
Dec 07 10:22:49 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:49 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:22:49 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:22:49.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:22:50 compute-1 ceph-mon[80077]: pgmap v1222: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:22:50 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:50 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:22:50 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:22:50.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:22:51 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:51 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:22:51 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:22:51.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:22:52 compute-1 ceph-mon[80077]: pgmap v1223: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:22:52 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:52 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:22:52 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:22:52.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:22:53 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:53 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:22:53 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:22:53.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:22:54 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:22:54 compute-1 ceph-mon[80077]: pgmap v1224: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:22:54 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:54 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:22:54 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:22:54.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:22:54 compute-1 podman[248952]: 2025-12-07 10:22:54.594720982 +0000 UTC m=+0.081445836 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 07 10:22:55 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:55 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:22:55 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:22:55.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:22:56 compute-1 ceph-mon[80077]: pgmap v1225: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:22:56 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:56 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:22:56 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:22:56.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:22:57 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:22:57 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:57 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:22:57 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:22:57.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:22:58 compute-1 ceph-mon[80077]: pgmap v1226: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:22:58 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:58 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:22:58 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:22:58.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:22:59 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:22:59 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:22:59 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:22:59 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:22:59.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:23:00 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:00 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:23:00 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:00.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:23:00 compute-1 ceph-mon[80077]: pgmap v1227: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:23:01 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:01 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:23:01 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:01.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:23:02 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:02 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:23:02 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:02.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:23:02 compute-1 ceph-mon[80077]: pgmap v1228: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:23:02 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 07 10:23:02 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1035656807' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 07 10:23:02 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 07 10:23:02 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1035656807' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 07 10:23:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/1035656807' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 07 10:23:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/1035656807' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 07 10:23:03 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:03 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:23:03 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:03.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:23:04 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:23:04 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:04 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:23:04 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:04.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:23:04 compute-1 ceph-mon[80077]: pgmap v1229: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:23:05 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:05 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:23:05 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:05.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:23:06 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:06 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:23:06 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:06.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:23:06 compute-1 ceph-mon[80077]: pgmap v1230: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:23:07 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:07 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:23:07 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:07.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:23:08 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:08 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:23:08 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:08.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:23:08 compute-1 ceph-mon[80077]: pgmap v1231: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:23:09 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:23:09 compute-1 sudo[248978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:23:09 compute-1 sudo[248978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:23:09 compute-1 sudo[248978]: pam_unix(sudo:session): session closed for user root
Dec 07 10:23:09 compute-1 podman[249002]: 2025-12-07 10:23:09.494011633 +0000 UTC m=+0.132637530 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 07 10:23:09 compute-1 ceph-mon[80077]: pgmap v1232: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:23:09 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:09 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:23:09 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:09.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:23:10 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:10 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:23:10 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:10.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:23:11 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:11 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:23:11 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:11.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:23:12 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:12 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:23:12 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:12.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:23:12 compute-1 ceph-mon[80077]: pgmap v1233: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:23:12 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:23:13 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:13 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:23:13 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:13.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:23:13 compute-1 ceph-mon[80077]: pgmap v1234: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:23:14 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:23:14 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:14 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:23:14 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:14.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:23:14 compute-1 sudo[249032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 10:23:14 compute-1 sudo[249032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:23:14 compute-1 sudo[249032]: pam_unix(sudo:session): session closed for user root
Dec 07 10:23:14 compute-1 sudo[249057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 07 10:23:14 compute-1 sudo[249057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:23:15 compute-1 sudo[249057]: pam_unix(sudo:session): session closed for user root
Dec 07 10:23:15 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:23:15 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 07 10:23:15 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:23:15 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:23:15 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 07 10:23:15 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 07 10:23:15 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:23:15 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:15 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:23:15 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:15.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:23:16 compute-1 ceph-mon[80077]: pgmap v1235: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:23:16 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:16 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:23:16 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:16.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:23:16 compute-1 podman[249114]: 2025-12-07 10:23:16.594492583 +0000 UTC m=+0.079961495 container health_status 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 07 10:23:17 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:17 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:23:17 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:17.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:23:18 compute-1 ceph-mon[80077]: pgmap v1236: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Dec 07 10:23:18 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:18 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:23:18 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:18.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:23:19 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:23:19 compute-1 nova_compute[230488]: 2025-12-07 10:23:19.493 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:23:19 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:19 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:23:19 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:19.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:23:20 compute-1 sudo[249137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 07 10:23:20 compute-1 sudo[249137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:23:20 compute-1 sudo[249137]: pam_unix(sudo:session): session closed for user root
Dec 07 10:23:20 compute-1 ceph-mon[80077]: pgmap v1237: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:23:20 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:23:20 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:23:20 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:20 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:23:20 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:20.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:23:20 compute-1 radosgw[84964]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Dec 07 10:23:20 compute-1 radosgw[84964]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Dec 07 10:23:20 compute-1 radosgw[84964]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Dec 07 10:23:20 compute-1 radosgw[84964]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Dec 07 10:23:20 compute-1 radosgw[84964]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Dec 07 10:23:20 compute-1 radosgw[84964]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Dec 07 10:23:21 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:21 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:23:21 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:21.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:23:22 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:22 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:23:22 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:22.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:23:22 compute-1 ceph-mon[80077]: pgmap v1238: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Dec 07 10:23:23 compute-1 ceph-mon[80077]: pgmap v1239: 337 pgs: 337 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Dec 07 10:23:23 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:23 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:23:23 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:23.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:23:24 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:23:24 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:24 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:23:24 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:24.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:23:25 compute-1 podman[249164]: 2025-12-07 10:23:25.569849696 +0000 UTC m=+0.065871963 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 07 10:23:25 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:25 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:23:25 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:25.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:23:26 compute-1 nova_compute[230488]: 2025-12-07 10:23:26.301 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:23:26 compute-1 ceph-mon[80077]: pgmap v1240: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 0 B/s wr, 172 op/s
Dec 07 10:23:26 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:26 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:23:26 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:26.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:23:27 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:23:27 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:27 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:23:27 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:27.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:23:27 compute-1 ceph-mon[80077]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #70. Immutable memtables: 0.
Dec 07 10:23:27 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:23:27.958100) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 07 10:23:27 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 70
Dec 07 10:23:27 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765103007958205, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 678, "num_deletes": 251, "total_data_size": 1253289, "memory_usage": 1272656, "flush_reason": "Manual Compaction"}
Dec 07 10:23:27 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #71: started
Dec 07 10:23:27 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765103007966826, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 71, "file_size": 580314, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37209, "largest_seqno": 37882, "table_properties": {"data_size": 577345, "index_size": 877, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 8026, "raw_average_key_size": 20, "raw_value_size": 571159, "raw_average_value_size": 1475, "num_data_blocks": 38, "num_entries": 387, "num_filter_entries": 387, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765102966, "oldest_key_time": 1765102966, "file_creation_time": 1765103007, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "19b7cd17-892b-4642-8771-311739802c4a", "db_session_id": "JE48258Z8V09XG5T5TD7", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Dec 07 10:23:27 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 8786 microseconds, and 4910 cpu microseconds.
Dec 07 10:23:27 compute-1 ceph-mon[80077]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 07 10:23:27 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:23:27.966901) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #71: 580314 bytes OK
Dec 07 10:23:27 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:23:27.966931) [db/memtable_list.cc:519] [default] Level-0 commit table #71 started
Dec 07 10:23:27 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:23:27.968678) [db/memtable_list.cc:722] [default] Level-0 commit table #71: memtable #1 done
Dec 07 10:23:27 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:23:27.968708) EVENT_LOG_v1 {"time_micros": 1765103007968698, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 07 10:23:27 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:23:27.968739) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 07 10:23:27 compute-1 ceph-mon[80077]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 1249624, prev total WAL file size 1249624, number of live WAL files 2.
Dec 07 10:23:27 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000067.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 10:23:27 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:23:27.969923) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303033' seq:72057594037927935, type:22 .. '6D6772737461740031323535' seq:0, type:0; will stop at (end)
Dec 07 10:23:27 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 07 10:23:27 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [71(566KB)], [69(14MB)]
Dec 07 10:23:27 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765103007970035, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [71], "files_L6": [69], "score": -1, "input_data_size": 15917732, "oldest_snapshot_seqno": -1}
Dec 07 10:23:28 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #72: 6541 keys, 12064383 bytes, temperature: kUnknown
Dec 07 10:23:28 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765103008058873, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 72, "file_size": 12064383, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12024231, "index_size": 22696, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16389, "raw_key_size": 172694, "raw_average_key_size": 26, "raw_value_size": 11909706, "raw_average_value_size": 1820, "num_data_blocks": 887, "num_entries": 6541, "num_filter_entries": 6541, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100473, "oldest_key_time": 0, "file_creation_time": 1765103007, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "19b7cd17-892b-4642-8771-311739802c4a", "db_session_id": "JE48258Z8V09XG5T5TD7", "orig_file_number": 72, "seqno_to_time_mapping": "N/A"}}
Dec 07 10:23:28 compute-1 ceph-mon[80077]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 07 10:23:28 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:23:28.059274) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 12064383 bytes
Dec 07 10:23:28 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:23:28.060756) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 178.9 rd, 135.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 14.6 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(48.2) write-amplify(20.8) OK, records in: 7042, records dropped: 501 output_compression: NoCompression
Dec 07 10:23:28 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:23:28.060781) EVENT_LOG_v1 {"time_micros": 1765103008060769, "job": 42, "event": "compaction_finished", "compaction_time_micros": 88959, "compaction_time_cpu_micros": 55886, "output_level": 6, "num_output_files": 1, "total_output_size": 12064383, "num_input_records": 7042, "num_output_records": 6541, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 07 10:23:28 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 10:23:28 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765103008061044, "job": 42, "event": "table_file_deletion", "file_number": 71}
Dec 07 10:23:28 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000069.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 10:23:28 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765103008065201, "job": 42, "event": "table_file_deletion", "file_number": 69}
Dec 07 10:23:28 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:23:27.969762) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:23:28 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:23:28.065309) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:23:28 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:23:28.065316) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:23:28 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:23:28.065318) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:23:28 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:23:28.065320) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:23:28 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:23:28.065322) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:23:28 compute-1 nova_compute[230488]: 2025-12-07 10:23:28.270 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:23:28 compute-1 nova_compute[230488]: 2025-12-07 10:23:28.270 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:23:28 compute-1 nova_compute[230488]: 2025-12-07 10:23:28.311 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:23:28 compute-1 nova_compute[230488]: 2025-12-07 10:23:28.312 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:23:28 compute-1 nova_compute[230488]: 2025-12-07 10:23:28.313 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:23:28 compute-1 nova_compute[230488]: 2025-12-07 10:23:28.313 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 07 10:23:28 compute-1 nova_compute[230488]: 2025-12-07 10:23:28.314 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 07 10:23:28 compute-1 sshd-session[249185]: Invalid user postgres from 104.248.193.130 port 39326
Dec 07 10:23:28 compute-1 ceph-mon[80077]: pgmap v1241: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 103 KiB/s rd, 0 B/s wr, 171 op/s
Dec 07 10:23:28 compute-1 sshd-session[249185]: Connection closed by invalid user postgres 104.248.193.130 port 39326 [preauth]
Dec 07 10:23:28 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:28 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:23:28 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:28.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:23:28 compute-1 nova_compute[230488]: 2025-12-07 10:23:28.807 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 07 10:23:29 compute-1 nova_compute[230488]: 2025-12-07 10:23:29.063 230492 WARNING nova.virt.libvirt.driver [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 07 10:23:29 compute-1 nova_compute[230488]: 2025-12-07 10:23:29.066 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5168MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 07 10:23:29 compute-1 nova_compute[230488]: 2025-12-07 10:23:29.067 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:23:29 compute-1 nova_compute[230488]: 2025-12-07 10:23:29.067 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:23:29 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:23:29 compute-1 sudo[249210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:23:29 compute-1 sudo[249210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:23:29 compute-1 sudo[249210]: pam_unix(sudo:session): session closed for user root
Dec 07 10:23:29 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/1217218982' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:23:29 compute-1 nova_compute[230488]: 2025-12-07 10:23:29.555 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 07 10:23:29 compute-1 nova_compute[230488]: 2025-12-07 10:23:29.556 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 07 10:23:29 compute-1 nova_compute[230488]: 2025-12-07 10:23:29.588 230492 DEBUG nova.scheduler.client.report [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Refreshing inventories for resource provider 58b51610-0751-43d9-94a3-66540bffec81 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 07 10:23:29 compute-1 nova_compute[230488]: 2025-12-07 10:23:29.805 230492 DEBUG nova.scheduler.client.report [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Updating ProviderTree inventory for provider 58b51610-0751-43d9-94a3-66540bffec81 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 07 10:23:29 compute-1 nova_compute[230488]: 2025-12-07 10:23:29.805 230492 DEBUG nova.compute.provider_tree [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Updating inventory in ProviderTree for provider 58b51610-0751-43d9-94a3-66540bffec81 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 07 10:23:29 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:29 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:23:29 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:29.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:23:29 compute-1 nova_compute[230488]: 2025-12-07 10:23:29.907 230492 DEBUG nova.scheduler.client.report [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Refreshing aggregate associations for resource provider 58b51610-0751-43d9-94a3-66540bffec81, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 07 10:23:29 compute-1 nova_compute[230488]: 2025-12-07 10:23:29.971 230492 DEBUG nova.scheduler.client.report [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Refreshing trait associations for resource provider 58b51610-0751-43d9-94a3-66540bffec81, traits: HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE,HW_CPU_X86_AESNI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AVX,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI2,COMPUTE_RESCUE_BFV,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_F16C,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SVM,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_EXTEND _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 07 10:23:30 compute-1 nova_compute[230488]: 2025-12-07 10:23:30.044 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 07 10:23:30 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 07 10:23:30 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1405569904' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:23:30 compute-1 nova_compute[230488]: 2025-12-07 10:23:30.493 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 07 10:23:30 compute-1 nova_compute[230488]: 2025-12-07 10:23:30.500 230492 DEBUG nova.compute.provider_tree [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Inventory has not changed in ProviderTree for provider: 58b51610-0751-43d9-94a3-66540bffec81 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 07 10:23:30 compute-1 ceph-mon[80077]: pgmap v1242: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 0 B/s wr, 171 op/s
Dec 07 10:23:30 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/1405569904' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:23:30 compute-1 nova_compute[230488]: 2025-12-07 10:23:30.523 230492 DEBUG nova.scheduler.client.report [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Inventory has not changed for provider 58b51610-0751-43d9-94a3-66540bffec81 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 07 10:23:30 compute-1 nova_compute[230488]: 2025-12-07 10:23:30.526 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 07 10:23:30 compute-1 nova_compute[230488]: 2025-12-07 10:23:30.526 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.459s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:23:30 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:30 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:23:30 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:30.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:23:31 compute-1 nova_compute[230488]: 2025-12-07 10:23:31.526 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:23:31 compute-1 nova_compute[230488]: 2025-12-07 10:23:31.527 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:23:31 compute-1 nova_compute[230488]: 2025-12-07 10:23:31.528 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:23:31 compute-1 nova_compute[230488]: 2025-12-07 10:23:31.528 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 07 10:23:31 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:31 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:23:31 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:31.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:23:32 compute-1 nova_compute[230488]: 2025-12-07 10:23:32.272 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:23:32 compute-1 nova_compute[230488]: 2025-12-07 10:23:32.272 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 07 10:23:32 compute-1 nova_compute[230488]: 2025-12-07 10:23:32.272 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 07 10:23:32 compute-1 nova_compute[230488]: 2025-12-07 10:23:32.295 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 07 10:23:32 compute-1 ceph-mon[80077]: pgmap v1243: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 103 KiB/s rd, 0 B/s wr, 171 op/s
Dec 07 10:23:32 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2036835216' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:23:32 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/2359469985' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:23:32 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:32 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:23:32 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:32.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:23:33 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/785250674' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:23:33 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/3881808361' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:23:33 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:33 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:23:33 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:33.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:23:33 compute-1 nova_compute[230488]: 2025-12-07 10:23:33.913 230492 DEBUG oslo_concurrency.processutils [None req-df4cc2da-b1d5-430c-beef-9facdb501d68 24eb8006efd340518863613cf711b1e6 f2774f82d095448bbb688700083cf81d - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 07 10:23:33 compute-1 nova_compute[230488]: 2025-12-07 10:23:33.948 230492 DEBUG oslo_concurrency.processutils [None req-df4cc2da-b1d5-430c-beef-9facdb501d68 24eb8006efd340518863613cf711b1e6 f2774f82d095448bbb688700083cf81d - - default default] CMD "env LANG=C uptime" returned: 0 in 0.035s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 07 10:23:34 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:23:34 compute-1 nova_compute[230488]: 2025-12-07 10:23:34.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:23:34 compute-1 ceph-mon[80077]: pgmap v1244: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 103 KiB/s rd, 0 B/s wr, 171 op/s
Dec 07 10:23:34 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:34 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:23:34 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:34.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:23:35 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:35 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:23:35 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:35.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:23:36 compute-1 ceph-mon[80077]: pgmap v1245: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 0 B/s wr, 171 op/s
Dec 07 10:23:36 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:36 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:23:36 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:36.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:23:37 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:37 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:23:37 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:37.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:23:38 compute-1 sshd-session[249262]: Connection closed by authenticating user root 161.35.84.99 port 39098 [preauth]
Dec 07 10:23:38 compute-1 ceph-mon[80077]: pgmap v1246: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:23:38 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:38 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:23:38 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:38.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:23:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:23:38.660 142603 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:23:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:23:38.660 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:23:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:23:38.661 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:23:39 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:23:39 compute-1 nova_compute[230488]: 2025-12-07 10:23:39.265 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:23:39 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:39 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:23:39 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:39.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:23:39 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:23:39.892 142603 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '76:bc:71', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ce:90:a9:76:77:00'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 07 10:23:39 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:23:39.894 142603 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 07 10:23:40 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:40 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:23:40 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:40.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:23:40 compute-1 ceph-mon[80077]: pgmap v1247: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:23:40 compute-1 podman[249266]: 2025-12-07 10:23:40.650186816 +0000 UTC m=+0.143003144 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 07 10:23:41 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:41 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:23:41 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:41.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:23:42 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:42 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:23:42 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:42.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:23:42 compute-1 ceph-mon[80077]: pgmap v1248: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:23:42 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:23:42 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:23:42.897 142603 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e231b22a-cdf9-44dd-ad96-a8e48b3d52da, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 07 10:23:43 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:43 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:23:43 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:43.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:23:44 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:23:44 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:44 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:23:44 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:44.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:23:44 compute-1 ceph-mon[80077]: pgmap v1249: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:23:45 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:45 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:23:45 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:45.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:23:46 compute-1 ceph-mon[80077]: pgmap v1250: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:23:46 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:46 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:23:46 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:46.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:23:47 compute-1 podman[249295]: 2025-12-07 10:23:47.612717616 +0000 UTC m=+0.108422902 container health_status 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 07 10:23:47 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:47 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.002000053s ======
Dec 07 10:23:47 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:47.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Dec 07 10:23:48 compute-1 ceph-mon[80077]: pgmap v1251: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:23:48 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:48 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:23:48 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:48.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:23:49 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:23:49 compute-1 sudo[249316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:23:49 compute-1 sudo[249316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:23:49 compute-1 sudo[249316]: pam_unix(sudo:session): session closed for user root
Dec 07 10:23:49 compute-1 ceph-mon[80077]: pgmap v1252: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:23:49 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:49 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:23:49 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:49.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:23:50 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:50 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:23:50 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:50.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:23:51 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:51 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:23:51 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:51.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:23:52 compute-1 ceph-mon[80077]: pgmap v1253: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:23:52 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:52 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:23:52 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:52.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:23:53 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:53 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:23:53 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:53.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:23:54 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:23:54 compute-1 ceph-mon[80077]: pgmap v1254: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:23:54 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:54 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:23:54 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:54.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:23:55 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:55 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:23:55 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:55.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:23:56 compute-1 ceph-mon[80077]: pgmap v1255: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:23:56 compute-1 podman[249345]: 2025-12-07 10:23:56.592021786 +0000 UTC m=+0.084822109 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent)
Dec 07 10:23:56 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:56 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:23:56 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:56.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:23:57 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:23:57 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:57 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:23:57 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:57.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:23:58 compute-1 ceph-mon[80077]: pgmap v1256: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:23:58 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:58 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:23:58 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:23:58.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:23:59 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:23:59 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:23:59 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:23:59 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:23:59.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:24:00 compute-1 ceph-mon[80077]: pgmap v1257: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:24:00 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:00 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:24:00 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:00.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:24:01 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:01 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:24:01 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:01.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:24:02 compute-1 ceph-mon[80077]: pgmap v1258: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:24:02 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:02 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:24:02 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:02.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:24:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/2755057413' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 07 10:24:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/2755057413' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 07 10:24:03 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:03 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:24:03 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:03.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:24:04 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:24:04 compute-1 ceph-mon[80077]: pgmap v1259: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:24:04 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:04 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:24:04 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:04.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:24:05 compute-1 ceph-mon[80077]: pgmap v1260: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:24:05 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:05 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:24:05 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:05.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:24:06 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:06 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:24:06 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:06.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:24:07 compute-1 ceph-mon[80077]: pgmap v1261: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:24:07 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:07 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:24:07 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:07.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:24:08 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:08 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:24:08 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:08.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:24:09 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:24:09 compute-1 sudo[249370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:24:09 compute-1 sudo[249370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:24:09 compute-1 sudo[249370]: pam_unix(sudo:session): session closed for user root
Dec 07 10:24:09 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:09 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:24:09 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:09.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:24:10 compute-1 ceph-mon[80077]: pgmap v1262: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:24:10 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:10 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:24:10 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:10.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:24:11 compute-1 podman[249396]: 2025-12-07 10:24:11.634991844 +0000 UTC m=+0.124164491 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 07 10:24:11 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:11 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:24:11 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:11.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:24:12 compute-1 ceph-mon[80077]: pgmap v1263: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:24:12 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:24:12 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:12 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:24:12 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:12.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:24:13 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:13 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:24:13 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:13.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:24:14 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:24:14 compute-1 ceph-mon[80077]: pgmap v1264: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:24:14 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:14 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:24:14 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:14.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:24:15 compute-1 sshd-session[249424]: Invalid user postgres from 104.248.193.130 port 35062
Dec 07 10:24:15 compute-1 sshd-session[249424]: Connection closed by invalid user postgres 104.248.193.130 port 35062 [preauth]
Dec 07 10:24:15 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:15 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:24:15 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:15.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:24:16 compute-1 ceph-mon[80077]: pgmap v1265: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:24:16 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:16 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:24:16 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:16.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:24:17 compute-1 ceph-mon[80077]: pgmap v1266: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:24:17 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:17 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:24:17 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:17.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:24:18 compute-1 podman[249428]: 2025-12-07 10:24:18.580717898 +0000 UTC m=+0.079141845 container health_status 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 07 10:24:18 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:18 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:24:18 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:18.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:24:19 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:24:19 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:19 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:24:19 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:19.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:24:20 compute-1 ceph-mon[80077]: pgmap v1267: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:24:20 compute-1 sudo[249449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 10:24:20 compute-1 sudo[249449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:24:20 compute-1 sudo[249449]: pam_unix(sudo:session): session closed for user root
Dec 07 10:24:20 compute-1 sudo[249474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 07 10:24:20 compute-1 sudo[249474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:24:20 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:20 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:24:20 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:20.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:24:21 compute-1 sudo[249474]: pam_unix(sudo:session): session closed for user root
Dec 07 10:24:21 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:24:21 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 07 10:24:21 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:24:21 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:24:21 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 07 10:24:21 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 07 10:24:21 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:24:21 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:21 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:24:21 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:21.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:24:22 compute-1 ceph-mon[80077]: pgmap v1268: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Dec 07 10:24:23 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:23 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:24:23 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:22.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:24:23 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:23 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:24:23 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:23.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:24:24 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:24:24 compute-1 ceph-mon[80077]: pgmap v1269: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Dec 07 10:24:25 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:25 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:24:25 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:25.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:24:25 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:25 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:24:25 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:25.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:24:26 compute-1 sudo[249535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 07 10:24:26 compute-1 sudo[249535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:24:26 compute-1 sudo[249535]: pam_unix(sudo:session): session closed for user root
Dec 07 10:24:26 compute-1 ceph-mon[80077]: pgmap v1270: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:24:26 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:24:26 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:24:27 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:27 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:24:27 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:27.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:24:27 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:24:27 compute-1 podman[249560]: 2025-12-07 10:24:27.588428833 +0000 UTC m=+0.067716354 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 07 10:24:27 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:27 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:24:27 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:27.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:24:28 compute-1 nova_compute[230488]: 2025-12-07 10:24:28.270 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:24:28 compute-1 ceph-mon[80077]: pgmap v1271: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Dec 07 10:24:29 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:29 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:24:29 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:29.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:24:29 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:24:29 compute-1 nova_compute[230488]: 2025-12-07 10:24:29.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:24:29 compute-1 sudo[249580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:24:29 compute-1 sudo[249580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:24:29 compute-1 sudo[249580]: pam_unix(sudo:session): session closed for user root
Dec 07 10:24:29 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:29 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:24:29 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:29.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:24:30 compute-1 nova_compute[230488]: 2025-12-07 10:24:30.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:24:30 compute-1 nova_compute[230488]: 2025-12-07 10:24:30.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:24:30 compute-1 nova_compute[230488]: 2025-12-07 10:24:30.270 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:24:30 compute-1 nova_compute[230488]: 2025-12-07 10:24:30.303 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:24:30 compute-1 nova_compute[230488]: 2025-12-07 10:24:30.304 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:24:30 compute-1 nova_compute[230488]: 2025-12-07 10:24:30.304 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:24:30 compute-1 nova_compute[230488]: 2025-12-07 10:24:30.305 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 07 10:24:30 compute-1 nova_compute[230488]: 2025-12-07 10:24:30.305 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 07 10:24:30 compute-1 ceph-mon[80077]: pgmap v1272: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:24:30 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 07 10:24:30 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1130400221' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:24:30 compute-1 nova_compute[230488]: 2025-12-07 10:24:30.787 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 07 10:24:31 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:31 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:24:31 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:31.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:24:31 compute-1 nova_compute[230488]: 2025-12-07 10:24:31.050 230492 WARNING nova.virt.libvirt.driver [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 07 10:24:31 compute-1 nova_compute[230488]: 2025-12-07 10:24:31.052 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5165MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 07 10:24:31 compute-1 nova_compute[230488]: 2025-12-07 10:24:31.053 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:24:31 compute-1 nova_compute[230488]: 2025-12-07 10:24:31.053 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:24:31 compute-1 nova_compute[230488]: 2025-12-07 10:24:31.146 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 07 10:24:31 compute-1 nova_compute[230488]: 2025-12-07 10:24:31.147 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 07 10:24:31 compute-1 nova_compute[230488]: 2025-12-07 10:24:31.174 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 07 10:24:31 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 07 10:24:31 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/32291253' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:24:31 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/1130400221' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:24:31 compute-1 nova_compute[230488]: 2025-12-07 10:24:31.646 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 07 10:24:31 compute-1 nova_compute[230488]: 2025-12-07 10:24:31.655 230492 DEBUG nova.compute.provider_tree [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Inventory has not changed in ProviderTree for provider: 58b51610-0751-43d9-94a3-66540bffec81 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 07 10:24:31 compute-1 nova_compute[230488]: 2025-12-07 10:24:31.679 230492 DEBUG nova.scheduler.client.report [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Inventory has not changed for provider 58b51610-0751-43d9-94a3-66540bffec81 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 07 10:24:31 compute-1 nova_compute[230488]: 2025-12-07 10:24:31.682 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 07 10:24:31 compute-1 nova_compute[230488]: 2025-12-07 10:24:31.682 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:24:31 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:31 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:24:31 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:31.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:24:32 compute-1 ceph-mon[80077]: pgmap v1273: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Dec 07 10:24:32 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/32291253' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:24:33 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:33 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:24:33 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:33.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:24:33 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/637913036' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:24:33 compute-1 nova_compute[230488]: 2025-12-07 10:24:33.683 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:24:33 compute-1 nova_compute[230488]: 2025-12-07 10:24:33.684 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 07 10:24:33 compute-1 nova_compute[230488]: 2025-12-07 10:24:33.684 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 07 10:24:33 compute-1 nova_compute[230488]: 2025-12-07 10:24:33.701 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 07 10:24:33 compute-1 nova_compute[230488]: 2025-12-07 10:24:33.702 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:24:33 compute-1 nova_compute[230488]: 2025-12-07 10:24:33.702 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 07 10:24:33 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:33 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:24:33 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:33.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:24:34 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:24:34 compute-1 nova_compute[230488]: 2025-12-07 10:24:34.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:24:34 compute-1 ceph-mon[80077]: pgmap v1274: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:24:34 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/3279930394' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:24:34 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1175994644' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:24:34 compute-1 sshd-session[249652]: Connection closed by authenticating user root 161.35.84.99 port 43874 [preauth]
Dec 07 10:24:34 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 07 10:24:34 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/565244694' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:24:35 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:35 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:24:35 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:35.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:24:35 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/565244694' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:24:35 compute-1 ceph-mon[80077]: pgmap v1275: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:24:35 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:35 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:24:35 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:35.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:24:36 compute-1 nova_compute[230488]: 2025-12-07 10:24:36.264 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:24:37 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:37 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:24:37 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:37.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:24:37 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:37 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:24:37 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:37.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:24:38 compute-1 ceph-mon[80077]: pgmap v1276: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:24:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:24:38.661 142603 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:24:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:24:38.662 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:24:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:24:38.663 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:24:39 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:39 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:24:39 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:39.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:24:39 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:24:39 compute-1 nova_compute[230488]: 2025-12-07 10:24:39.300 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:24:39 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:39 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:24:39 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:39.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:24:40 compute-1 ceph-mon[80077]: pgmap v1277: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:24:41 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:41 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:24:41 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:41.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:24:41 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:41 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:24:41 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:41.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:24:42 compute-1 ceph-mon[80077]: pgmap v1278: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:24:42 compute-1 podman[249658]: 2025-12-07 10:24:42.643398474 +0000 UTC m=+0.143304521 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 07 10:24:43 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:43 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:24:43 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:43.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:24:43 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:24:43 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:43 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:24:43 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:43.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:24:44 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:24:44 compute-1 ceph-mon[80077]: pgmap v1279: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:24:45 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:45 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:24:45 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:45.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:24:45 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:45 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:24:45 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:45.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:24:47 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:47 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:24:47 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:47.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:24:47 compute-1 ceph-mon[80077]: pgmap v1280: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:24:47 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:47 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:24:47 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:47.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:24:48 compute-1 ceph-mon[80077]: pgmap v1281: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:24:49 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:49 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:24:49 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:49.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:24:49 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:24:49 compute-1 podman[249687]: 2025-12-07 10:24:49.642493909 +0000 UTC m=+0.126181565 container health_status 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 07 10:24:49 compute-1 sudo[249707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:24:49 compute-1 sudo[249707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:24:49 compute-1 sudo[249707]: pam_unix(sudo:session): session closed for user root
Dec 07 10:24:49 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:49 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:24:49 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:49.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:24:50 compute-1 ceph-mon[80077]: pgmap v1282: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:24:51 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:51 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:24:51 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:51.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:24:51 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:51 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:24:51 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:51.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:24:52 compute-1 ceph-mon[80077]: pgmap v1283: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:24:53 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:53 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:24:53 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:53.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:24:53 compute-1 ceph-mon[80077]: pgmap v1284: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:24:53 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:53 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:24:53 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:53.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:24:54 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:24:55 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:55 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:24:55 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:55.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:24:55 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:55 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:24:55 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:55.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:24:56 compute-1 ceph-mon[80077]: pgmap v1285: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:24:57 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:57 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:24:57 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:57.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:24:57 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:57 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:24:57 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:57.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:24:58 compute-1 ceph-mon[80077]: pgmap v1286: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:24:58 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:24:58 compute-1 podman[249737]: 2025-12-07 10:24:58.590546722 +0000 UTC m=+0.070102119 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 07 10:24:59 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:59 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:24:59 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:24:59.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:24:59 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:24:59 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:24:59 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:24:59 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:24:59.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:25:00 compute-1 sshd-session[249757]: Invalid user postgres from 104.248.193.130 port 55184
Dec 07 10:25:00 compute-1 ceph-mon[80077]: pgmap v1287: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:25:00 compute-1 sshd-session[249757]: Connection closed by invalid user postgres 104.248.193.130 port 55184 [preauth]
Dec 07 10:25:01 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:25:01 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:25:01 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:01.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:25:01 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:25:01 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:25:01 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:25:01.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:25:02 compute-1 ceph-mon[80077]: pgmap v1288: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:25:03 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:25:03 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:25:03 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:03.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:25:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/2005296220' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 07 10:25:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/2005296220' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 07 10:25:03 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:25:03 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:25:03 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:25:03.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:25:04 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:25:04 compute-1 ceph-mon[80077]: pgmap v1289: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:25:05 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:25:05 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:25:05 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:05.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:25:05 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:25:05 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:25:05 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:25:05.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:25:06 compute-1 ceph-mon[80077]: pgmap v1290: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:25:07 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:25:07 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:25:07 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:07.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:25:07 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:25:07 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:25:07 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:25:07.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:25:08 compute-1 ceph-mon[80077]: pgmap v1291: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:25:09 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:25:09 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:25:09 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:09.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:25:09 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:25:09 compute-1 sudo[249764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:25:09 compute-1 sudo[249764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:25:09 compute-1 sudo[249764]: pam_unix(sudo:session): session closed for user root
Dec 07 10:25:09 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:25:09 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:25:09 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:25:09.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:25:10 compute-1 ceph-mon[80077]: pgmap v1292: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:25:11 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:25:11 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:25:11 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:11.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:25:11 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:25:11 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:25:11 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:25:11.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:25:12 compute-1 ceph-mon[80077]: pgmap v1293: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:25:12 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:25:13 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:25:13 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:25:13 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:13.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:25:13 compute-1 podman[249791]: 2025-12-07 10:25:13.674199435 +0000 UTC m=+0.165785063 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 07 10:25:13 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:25:13 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:25:13 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:25:13.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:25:14 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:25:14 compute-1 ceph-mon[80077]: pgmap v1294: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:25:15 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:25:15 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:25:15 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:15.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:25:15 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:25:15 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:25:15 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:25:15.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:25:16 compute-1 ceph-mon[80077]: pgmap v1295: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:25:17 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:25:17 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:25:17 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:17.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:25:17 compute-1 ceph-mon[80077]: pgmap v1296: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:25:17 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:25:17 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:25:17 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:25:17.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:25:19 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:25:19 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:25:19 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:19.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:25:19 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:25:19 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:25:19 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:25:19 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:25:19.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:25:20 compute-1 ceph-mon[80077]: pgmap v1297: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:25:20 compute-1 podman[249821]: 2025-12-07 10:25:20.619046474 +0000 UTC m=+0.114707313 container health_status 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Dec 07 10:25:21 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:25:21 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:25:21 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:21.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:25:21 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:25:21 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:25:21 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:25:21.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:25:22 compute-1 ceph-mon[80077]: pgmap v1298: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:25:23 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:25:23 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:25:23 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:23.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:25:23 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:25:23 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:25:23 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:25:23.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:25:24 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:25:24 compute-1 ceph-mon[80077]: pgmap v1299: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:25:25 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:25:25 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:25:25 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:25.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:25:25 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:25:25 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:25:25 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:25:25.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:25:26 compute-1 ceph-mon[80077]: pgmap v1300: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:25:26 compute-1 sudo[249846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 10:25:26 compute-1 sudo[249846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:25:26 compute-1 sudo[249846]: pam_unix(sudo:session): session closed for user root
Dec 07 10:25:26 compute-1 sudo[249871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 07 10:25:26 compute-1 sudo[249871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:25:27 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:25:27 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:25:27 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:27.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:25:27 compute-1 podman[249968]: 2025-12-07 10:25:27.393397252 +0000 UTC m=+0.073198603 container exec 0adb3b962f9be9f77484cea48a61ddb6dbdac27d2f6f2d11917c2759647cf1b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-crash-compute-1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec 07 10:25:27 compute-1 podman[249968]: 2025-12-07 10:25:27.52593723 +0000 UTC m=+0.205738541 container exec_died 0adb3b962f9be9f77484cea48a61ddb6dbdac27d2f6f2d11917c2759647cf1b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-crash-compute-1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec 07 10:25:27 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:25:27 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:25:27 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:25:27.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:25:28 compute-1 podman[250089]: 2025-12-07 10:25:28.068347683 +0000 UTC m=+0.063174700 container exec 0a4d2954764c22b86121bac3d11bc9b63e47782ce61bae7b4370d135249c4a00 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 07 10:25:28 compute-1 podman[250089]: 2025-12-07 10:25:28.105245117 +0000 UTC m=+0.100072104 container exec_died 0a4d2954764c22b86121bac3d11bc9b63e47782ce61bae7b4370d135249c4a00 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 07 10:25:28 compute-1 ceph-mon[80077]: pgmap v1301: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:25:28 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:25:28 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 07 10:25:28 compute-1 sshd-session[250088]: Connection closed by authenticating user root 161.35.84.99 port 38236 [preauth]
Dec 07 10:25:28 compute-1 podman[250226]: 2025-12-07 10:25:28.699966364 +0000 UTC m=+0.075908727 container exec beb6c210855d5dee9a79b7f50cd4854bb9f0b0f00930d5f252196d17b26aec7f (image=quay.io/ceph/haproxy:2.3, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua)
Dec 07 10:25:28 compute-1 podman[250226]: 2025-12-07 10:25:28.716196315 +0000 UTC m=+0.092138628 container exec_died beb6c210855d5dee9a79b7f50cd4854bb9f0b0f00930d5f252196d17b26aec7f (image=quay.io/ceph/haproxy:2.3, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-haproxy-nfs-cephfs-compute-1-kwciua)
Dec 07 10:25:28 compute-1 podman[250260]: 2025-12-07 10:25:28.856400481 +0000 UTC m=+0.063127159 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 07 10:25:29 compute-1 podman[250309]: 2025-12-07 10:25:29.04593351 +0000 UTC m=+0.094620207 container exec 63d2276d352335c300047c0a6c5b69b30b1c28b930e12681d437c0d1d4a9599f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-nfs-cephfs-compute-1-gawwbe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, distribution-scope=public, build-date=2023-02-22T09:23:20, vcs-type=git, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, vendor=Red Hat, Inc., version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph.)
Dec 07 10:25:29 compute-1 podman[250309]: 2025-12-07 10:25:29.060965809 +0000 UTC m=+0.109652416 container exec_died 63d2276d352335c300047c0a6c5b69b30b1c28b930e12681d437c0d1d4a9599f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-75f4c9fd-539a-5e17-b55a-0a12a4e2736c-keepalived-nfs-cephfs-compute-1-gawwbe, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, com.redhat.component=keepalived-container, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, release=1793, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, description=keepalived for Ceph, name=keepalived)
Dec 07 10:25:29 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:25:29 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:25:29 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:29.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:25:29 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:25:29 compute-1 sudo[249871]: pam_unix(sudo:session): session closed for user root
Dec 07 10:25:29 compute-1 sudo[250342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 10:25:29 compute-1 sudo[250342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:25:29 compute-1 sudo[250342]: pam_unix(sudo:session): session closed for user root
Dec 07 10:25:29 compute-1 nova_compute[230488]: 2025-12-07 10:25:29.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:25:29 compute-1 nova_compute[230488]: 2025-12-07 10:25:29.270 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:25:29 compute-1 sudo[250367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 07 10:25:29 compute-1 sudo[250367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:25:29 compute-1 sudo[250367]: pam_unix(sudo:session): session closed for user root
Dec 07 10:25:29 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:25:29 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:25:29 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:25:29.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:25:30 compute-1 sudo[250423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:25:30 compute-1 sudo[250423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:25:30 compute-1 sudo[250423]: pam_unix(sudo:session): session closed for user root
Dec 07 10:25:30 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:25:30 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:25:30 compute-1 ceph-mon[80077]: pgmap v1302: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:25:30 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:25:30 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:25:30 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 07 10:25:30 compute-1 nova_compute[230488]: 2025-12-07 10:25:30.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:25:30 compute-1 nova_compute[230488]: 2025-12-07 10:25:30.270 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:25:31 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:25:31 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:25:31 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:31.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:25:31 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 07 10:25:31 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:25:31 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 07 10:25:31 compute-1 ceph-mon[80077]: pgmap v1303: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 07 10:25:31 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:25:31 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:25:31 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 07 10:25:31 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 07 10:25:31 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:25:31 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:25:31 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:25:31 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:25:31.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:25:32 compute-1 nova_compute[230488]: 2025-12-07 10:25:32.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:25:32 compute-1 nova_compute[230488]: 2025-12-07 10:25:32.303 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:25:32 compute-1 nova_compute[230488]: 2025-12-07 10:25:32.303 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:25:32 compute-1 nova_compute[230488]: 2025-12-07 10:25:32.304 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:25:32 compute-1 nova_compute[230488]: 2025-12-07 10:25:32.304 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 07 10:25:32 compute-1 nova_compute[230488]: 2025-12-07 10:25:32.305 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 07 10:25:32 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 07 10:25:32 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1637816441' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:25:32 compute-1 nova_compute[230488]: 2025-12-07 10:25:32.787 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
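The resource-tracker audit above shells out to "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" to measure cluster capacity before updating placement. A minimal sketch of that call, for illustration only; the function name is invented and the JSON keys read here ("stats", "total_bytes", "total_avail_bytes") are assumptions about the ceph df output layout, which can differ between Ceph releases:

import json
import subprocess

def ceph_capacity(conf="/etc/ceph/ceph.conf", user="openstack"):
    # Same command as logged by oslo_concurrency.processutils above.
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", user, "--conf", conf],
        check=True, capture_output=True, text=True,
    ).stdout
    stats = json.loads(out).get("stats", {})  # assumed key names, see note above
    return stats.get("total_bytes"), stats.get("total_avail_bytes")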
Dec 07 10:25:32 compute-1 nova_compute[230488]: 2025-12-07 10:25:32.989 230492 WARNING nova.virt.libvirt.driver [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 07 10:25:32 compute-1 nova_compute[230488]: 2025-12-07 10:25:32.990 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5140MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 07 10:25:32 compute-1 nova_compute[230488]: 2025-12-07 10:25:32.990 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:25:32 compute-1 nova_compute[230488]: 2025-12-07 10:25:32.991 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:25:33 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:25:33 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:25:33 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:33.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:25:33 compute-1 nova_compute[230488]: 2025-12-07 10:25:33.316 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 07 10:25:33 compute-1 nova_compute[230488]: 2025-12-07 10:25:33.317 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 07 10:25:33 compute-1 nova_compute[230488]: 2025-12-07 10:25:33.334 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 07 10:25:33 compute-1 ceph-mon[80077]: pgmap v1304: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 07 10:25:33 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/1637816441' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:25:33 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2354144481' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:25:33 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 07 10:25:33 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/927136780' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:25:33 compute-1 nova_compute[230488]: 2025-12-07 10:25:33.833 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 07 10:25:33 compute-1 nova_compute[230488]: 2025-12-07 10:25:33.840 230492 DEBUG nova.compute.provider_tree [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Inventory has not changed in ProviderTree for provider: 58b51610-0751-43d9-94a3-66540bffec81 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 07 10:25:33 compute-1 nova_compute[230488]: 2025-12-07 10:25:33.871 230492 DEBUG nova.scheduler.client.report [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Inventory has not changed for provider 58b51610-0751-43d9-94a3-66540bffec81 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
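The inventory reported to placement above carries total, reserved, and allocation_ratio per resource class; effective schedulable capacity is commonly derived as (total - reserved) * allocation_ratio. A short sketch using exactly the numbers logged above, purely to make the arithmetic visible:

# Values copied from the inventory line above; the capacity formula is the
# usual placement-style derivation, shown here for illustration.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g} schedulable units")
# Expected output: VCPU: 32, MEMORY_MB: 7167, DISK_GB: 53.1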
Dec 07 10:25:33 compute-1 nova_compute[230488]: 2025-12-07 10:25:33.874 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 07 10:25:33 compute-1 nova_compute[230488]: 2025-12-07 10:25:33.874 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.884s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:25:33 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:25:33 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:25:33 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:25:33.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:25:34 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:25:34 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/927136780' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:25:34 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/3408667319' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:25:34 compute-1 nova_compute[230488]: 2025-12-07 10:25:34.876 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:25:34 compute-1 nova_compute[230488]: 2025-12-07 10:25:34.877 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 07 10:25:35 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:25:35 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.002000054s ======
Dec 07 10:25:35 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:35.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Dec 07 10:25:35 compute-1 nova_compute[230488]: 2025-12-07 10:25:35.270 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:25:35 compute-1 nova_compute[230488]: 2025-12-07 10:25:35.271 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 07 10:25:35 compute-1 nova_compute[230488]: 2025-12-07 10:25:35.271 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 07 10:25:35 compute-1 nova_compute[230488]: 2025-12-07 10:25:35.298 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 07 10:25:35 compute-1 ceph-mon[80077]: pgmap v1305: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Dec 07 10:25:35 compute-1 sudo[250495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 07 10:25:35 compute-1 sudo[250495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:25:35 compute-1 sudo[250495]: pam_unix(sudo:session): session closed for user root
Dec 07 10:25:35 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:25:35 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:25:35 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:25:35.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:25:36 compute-1 nova_compute[230488]: 2025-12-07 10:25:36.268 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:25:36 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:25:36 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:25:36 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/299295895' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:25:37 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:25:37 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:25:37 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:37.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:25:37 compute-1 ceph-mon[80077]: pgmap v1306: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 07 10:25:37 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1147241048' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:25:37 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:25:37 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:25:37 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:25:37.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:25:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:25:38.661 142603 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:25:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:25:38.662 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:25:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:25:38.662 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:25:39 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:25:39 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:25:39 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:39.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:25:39 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:25:39 compute-1 ceph-mon[80077]: pgmap v1307: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 07 10:25:39 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:25:39 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:25:39 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:25:39.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:25:40 compute-1 nova_compute[230488]: 2025-12-07 10:25:40.265 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:25:41 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:25:41 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:25:41 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:41.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:25:41 compute-1 ceph-mon[80077]: pgmap v1308: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 07 10:25:41 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:25:41 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:25:41 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:25:41.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:25:42 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:25:43 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:25:43 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:25:43 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:43.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:25:43 compute-1 ceph-mon[80077]: pgmap v1309: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:25:43 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:25:43 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:25:43 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:25:43.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:25:44 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:25:44 compute-1 podman[250525]: 2025-12-07 10:25:44.621537563 +0000 UTC m=+0.115405622 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller)
Dec 07 10:25:44 compute-1 ceph-mon[80077]: pgmap v1310: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:25:45 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:25:45 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:25:45 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:45.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:25:45 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:25:45 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:25:45 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:25:45.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:25:47 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:25:47 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:25:47 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:47.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:25:47 compute-1 ceph-mon[80077]: pgmap v1311: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:25:47 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:25:47 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:25:47 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:25:47.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:25:48 compute-1 sshd-session[250552]: Invalid user postgres from 104.248.193.130 port 39048
Dec 07 10:25:48 compute-1 sshd-session[250552]: Connection closed by invalid user postgres 104.248.193.130 port 39048 [preauth]
Dec 07 10:25:48 compute-1 ceph-mon[80077]: pgmap v1312: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:25:49 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:25:49 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:25:49 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:25:49 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:49.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:25:49 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:25:49 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:25:49 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:25:49.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:25:50 compute-1 sudo[250555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:25:50 compute-1 sudo[250555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:25:50 compute-1 sudo[250555]: pam_unix(sudo:session): session closed for user root
Dec 07 10:25:51 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:25:51 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:25:51 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:51.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:25:51 compute-1 ceph-mon[80077]: pgmap v1313: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:25:51 compute-1 podman[250581]: 2025-12-07 10:25:51.569399254 +0000 UTC m=+0.070823808 container health_status 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 07 10:25:51 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:25:51 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:25:51 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:25:51.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:25:53 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:25:53 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:25:53 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:53.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:25:53 compute-1 ceph-mon[80077]: pgmap v1314: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:25:54 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:25:54 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:25:54 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:25:54.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:25:54 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:25:55 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:25:55 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:25:55 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:55.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:25:55 compute-1 ceph-mon[80077]: pgmap v1315: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:25:56 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:25:56 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:25:56 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:25:56.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:25:57 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:25:57 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:25:57 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:57.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:25:57 compute-1 ceph-mon[80077]: pgmap v1316: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:25:57 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:25:58 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:25:58 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:25:58 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:25:58.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:25:59 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:25:59 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:25:59 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:25:59 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:25:59.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:25:59 compute-1 ceph-mon[80077]: pgmap v1317: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:25:59 compute-1 podman[250605]: 2025-12-07 10:25:59.585724696 +0000 UTC m=+0.083007311 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 07 10:26:00 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:00 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:26:00 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:00.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:26:01 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:01 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:26:01 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:01.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:26:01 compute-1 ceph-mon[80077]: pgmap v1318: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:26:02 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:02 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:26:02 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:02.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:26:03 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:03 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:26:03 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:03.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:26:03 compute-1 ceph-mon[80077]: pgmap v1319: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:26:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/26774792' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 07 10:26:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/26774792' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 07 10:26:04 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:04 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:26:04 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:04.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:26:04 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:26:05 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:05 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:26:05 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:05.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:26:05 compute-1 ceph-mon[80077]: pgmap v1320: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:26:06 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:06 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:26:06 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:06.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:26:07 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:07 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:26:07 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:07.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:26:07 compute-1 ceph-mon[80077]: pgmap v1321: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:26:08 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:08 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:26:08 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:08.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:26:08 compute-1 ceph-mon[80077]: pgmap v1322: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:26:09 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:26:09 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:09 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:26:09 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:09.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:26:10 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:10 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:26:10 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:10.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:26:10 compute-1 sudo[250628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:26:10 compute-1 sudo[250628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:26:10 compute-1 sudo[250628]: pam_unix(sudo:session): session closed for user root
Dec 07 10:26:11 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:11 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:26:11 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:11.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:26:11 compute-1 ceph-mon[80077]: pgmap v1323: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:26:12 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:12 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:26:12 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:12.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:26:12 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:26:13 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:13 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:26:13 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:13.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:26:13 compute-1 ceph-mon[80077]: pgmap v1324: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:26:14 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:14 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:26:14 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:14.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:26:14 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:26:15 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:15 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:26:15 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:15.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:26:15 compute-1 ceph-mon[80077]: pgmap v1325: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:26:15 compute-1 podman[250656]: 2025-12-07 10:26:15.617804734 +0000 UTC m=+0.116464251 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 07 10:26:16 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:16 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:26:16 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:16.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:26:16 compute-1 sshd-session[250682]: Connection closed by authenticating user root 161.35.84.99 port 41740 [preauth]
Dec 07 10:26:17 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:17 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:26:17 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:17.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:26:17 compute-1 ceph-mon[80077]: pgmap v1326: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:26:18 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:18 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:26:18 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:18.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:26:19 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:26:19 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:19 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:26:19 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:19.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:26:19 compute-1 ceph-mon[80077]: pgmap v1327: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:26:19 compute-1 ceph-mon[80077]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #73. Immutable memtables: 0.
Dec 07 10:26:19 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:26:19.571255) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 07 10:26:19 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 73
Dec 07 10:26:19 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765103179571294, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 1933, "num_deletes": 251, "total_data_size": 4972669, "memory_usage": 5028672, "flush_reason": "Manual Compaction"}
Dec 07 10:26:19 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #74: started
Dec 07 10:26:19 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765103179588336, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 74, "file_size": 3251062, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37887, "largest_seqno": 39815, "table_properties": {"data_size": 3243077, "index_size": 4863, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16602, "raw_average_key_size": 20, "raw_value_size": 3227081, "raw_average_value_size": 3949, "num_data_blocks": 209, "num_entries": 817, "num_filter_entries": 817, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765103008, "oldest_key_time": 1765103008, "file_creation_time": 1765103179, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "19b7cd17-892b-4642-8771-311739802c4a", "db_session_id": "JE48258Z8V09XG5T5TD7", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Dec 07 10:26:19 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 17109 microseconds, and 6771 cpu microseconds.
Dec 07 10:26:19 compute-1 ceph-mon[80077]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 07 10:26:19 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:26:19.588368) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #74: 3251062 bytes OK
Dec 07 10:26:19 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:26:19.588385) [db/memtable_list.cc:519] [default] Level-0 commit table #74 started
Dec 07 10:26:19 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:26:19.589790) [db/memtable_list.cc:722] [default] Level-0 commit table #74: memtable #1 done
Dec 07 10:26:19 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:26:19.589801) EVENT_LOG_v1 {"time_micros": 1765103179589797, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 07 10:26:19 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:26:19.589819) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 07 10:26:19 compute-1 ceph-mon[80077]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 4964095, prev total WAL file size 4964095, number of live WAL files 2.
Dec 07 10:26:19 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000070.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 10:26:19 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:26:19.590970) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Dec 07 10:26:19 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 07 10:26:19 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [74(3174KB)], [72(11MB)]
Dec 07 10:26:19 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765103179591074, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [74], "files_L6": [72], "score": -1, "input_data_size": 15315445, "oldest_snapshot_seqno": -1}
Dec 07 10:26:19 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #75: 6838 keys, 13129109 bytes, temperature: kUnknown
Dec 07 10:26:19 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765103179681955, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 75, "file_size": 13129109, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13086402, "index_size": 24483, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17157, "raw_key_size": 179568, "raw_average_key_size": 26, "raw_value_size": 12966099, "raw_average_value_size": 1896, "num_data_blocks": 958, "num_entries": 6838, "num_filter_entries": 6838, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100473, "oldest_key_time": 0, "file_creation_time": 1765103179, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "19b7cd17-892b-4642-8771-311739802c4a", "db_session_id": "JE48258Z8V09XG5T5TD7", "orig_file_number": 75, "seqno_to_time_mapping": "N/A"}}
Dec 07 10:26:19 compute-1 ceph-mon[80077]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 07 10:26:19 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:26:19.682201) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 13129109 bytes
Dec 07 10:26:19 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:26:19.698010) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 168.4 rd, 144.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 11.5 +0.0 blob) out(12.5 +0.0 blob), read-write-amplify(8.7) write-amplify(4.0) OK, records in: 7358, records dropped: 520 output_compression: NoCompression
Dec 07 10:26:19 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:26:19.698054) EVENT_LOG_v1 {"time_micros": 1765103179698038, "job": 44, "event": "compaction_finished", "compaction_time_micros": 90948, "compaction_time_cpu_micros": 40559, "output_level": 6, "num_output_files": 1, "total_output_size": 13129109, "num_input_records": 7358, "num_output_records": 6838, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 07 10:26:19 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 10:26:19 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765103179698974, "job": 44, "event": "table_file_deletion", "file_number": 74}
Dec 07 10:26:19 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000072.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 10:26:19 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765103179701705, "job": 44, "event": "table_file_deletion", "file_number": 72}
Dec 07 10:26:19 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:26:19.590829) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:26:19 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:26:19.701901) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:26:19 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:26:19.701915) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:26:19 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:26:19.701920) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:26:19 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:26:19.701925) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:26:19 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:26:19.701929) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:26:20 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:20 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:26:20 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:20.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:26:21 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:21 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:26:21 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:21.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:26:21 compute-1 ceph-mon[80077]: pgmap v1328: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:26:22 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:22 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:26:22 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:22.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:26:22 compute-1 podman[250688]: 2025-12-07 10:26:22.578110493 +0000 UTC m=+0.077212312 container health_status 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 07 10:26:23 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:23 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:26:23 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:23.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:26:23 compute-1 ceph-mon[80077]: pgmap v1329: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:26:24 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:24 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:26:24 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:24.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:26:24 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:26:25 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:25 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:26:25 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:25.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:26:25 compute-1 ceph-mon[80077]: pgmap v1330: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:26:26 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:26 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:26:26 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:26.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:26:27 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:27 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:26:27 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:27.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:26:27 compute-1 ceph-mon[80077]: pgmap v1331: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:26:27 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:26:28 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:28 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:26:28 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:28.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:26:28 compute-1 ceph-mon[80077]: pgmap v1332: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:26:29 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:26:29 compute-1 nova_compute[230488]: 2025-12-07 10:26:29.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:26:29 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:29 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:26:29 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:29.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:26:30 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:30 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:26:30 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:30.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:26:30 compute-1 sudo[250713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:26:30 compute-1 sudo[250713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:26:30 compute-1 sudo[250713]: pam_unix(sudo:session): session closed for user root
Dec 07 10:26:30 compute-1 podman[250737]: 2025-12-07 10:26:30.423420362 +0000 UTC m=+0.055252645 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 07 10:26:31 compute-1 nova_compute[230488]: 2025-12-07 10:26:31.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:26:31 compute-1 nova_compute[230488]: 2025-12-07 10:26:31.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:26:31 compute-1 nova_compute[230488]: 2025-12-07 10:26:31.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:26:31 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:31 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:26:31 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:31.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:26:31 compute-1 ceph-mon[80077]: pgmap v1333: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:26:32 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:32 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:26:32 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:32.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:26:33 compute-1 nova_compute[230488]: 2025-12-07 10:26:33.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:26:33 compute-1 nova_compute[230488]: 2025-12-07 10:26:33.269 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 07 10:26:33 compute-1 nova_compute[230488]: 2025-12-07 10:26:33.270 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:26:33 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:33 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:26:33 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:33.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:26:33 compute-1 nova_compute[230488]: 2025-12-07 10:26:33.299 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:26:33 compute-1 nova_compute[230488]: 2025-12-07 10:26:33.301 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:26:33 compute-1 nova_compute[230488]: 2025-12-07 10:26:33.301 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:26:33 compute-1 nova_compute[230488]: 2025-12-07 10:26:33.302 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 07 10:26:33 compute-1 nova_compute[230488]: 2025-12-07 10:26:33.303 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 07 10:26:33 compute-1 ceph-mon[80077]: pgmap v1334: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:26:33 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 07 10:26:33 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/575799076' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:26:33 compute-1 nova_compute[230488]: 2025-12-07 10:26:33.763 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 07 10:26:33 compute-1 nova_compute[230488]: 2025-12-07 10:26:33.947 230492 WARNING nova.virt.libvirt.driver [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 07 10:26:33 compute-1 nova_compute[230488]: 2025-12-07 10:26:33.948 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5164MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 07 10:26:33 compute-1 nova_compute[230488]: 2025-12-07 10:26:33.949 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:26:33 compute-1 nova_compute[230488]: 2025-12-07 10:26:33.949 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:26:34 compute-1 nova_compute[230488]: 2025-12-07 10:26:34.055 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 07 10:26:34 compute-1 nova_compute[230488]: 2025-12-07 10:26:34.055 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 07 10:26:34 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:34 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:26:34 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:34.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:26:34 compute-1 nova_compute[230488]: 2025-12-07 10:26:34.094 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 07 10:26:34 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:26:34 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/575799076' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:26:34 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2898703659' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:26:34 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 07 10:26:34 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1482773286' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:26:34 compute-1 nova_compute[230488]: 2025-12-07 10:26:34.553 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 07 10:26:34 compute-1 nova_compute[230488]: 2025-12-07 10:26:34.562 230492 DEBUG nova.compute.provider_tree [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Inventory has not changed in ProviderTree for provider: 58b51610-0751-43d9-94a3-66540bffec81 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 07 10:26:34 compute-1 nova_compute[230488]: 2025-12-07 10:26:34.589 230492 DEBUG nova.scheduler.client.report [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Inventory has not changed for provider 58b51610-0751-43d9-94a3-66540bffec81 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 07 10:26:34 compute-1 nova_compute[230488]: 2025-12-07 10:26:34.593 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 07 10:26:34 compute-1 nova_compute[230488]: 2025-12-07 10:26:34.593 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:26:35 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:35 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:26:35 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:35.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:26:35 compute-1 sshd-session[250803]: Invalid user postgres from 104.248.193.130 port 49004
Dec 07 10:26:35 compute-1 sshd-session[250803]: Connection closed by invalid user postgres 104.248.193.130 port 49004 [preauth]
Dec 07 10:26:35 compute-1 ceph-mon[80077]: pgmap v1335: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:26:35 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/1482773286' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:26:35 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/3072265931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:26:35 compute-1 sudo[250805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 10:26:35 compute-1 sudo[250805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:26:35 compute-1 sudo[250805]: pam_unix(sudo:session): session closed for user root
Dec 07 10:26:36 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:36 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:26:36 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:36.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:26:36 compute-1 sudo[250830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 07 10:26:36 compute-1 sudo[250830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:26:36 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1518036194' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:26:36 compute-1 sudo[250830]: pam_unix(sudo:session): session closed for user root
Dec 07 10:26:37 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:37 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:26:37 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:37.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:26:37 compute-1 ceph-mon[80077]: pgmap v1336: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:26:37 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:26:37 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 07 10:26:37 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:26:37 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:26:37 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 07 10:26:37 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 07 10:26:37 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:26:37 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/239165462' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:26:37 compute-1 nova_compute[230488]: 2025-12-07 10:26:37.594 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:26:37 compute-1 nova_compute[230488]: 2025-12-07 10:26:37.595 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 07 10:26:37 compute-1 nova_compute[230488]: 2025-12-07 10:26:37.595 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 07 10:26:37 compute-1 nova_compute[230488]: 2025-12-07 10:26:37.629 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 07 10:26:37 compute-1 nova_compute[230488]: 2025-12-07 10:26:37.630 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:26:38 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:38 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:26:38 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:38.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:26:38 compute-1 ceph-mon[80077]: pgmap v1337: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 07 10:26:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:26:38.663 142603 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:26:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:26:38.663 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:26:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:26:38.663 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:26:39 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:26:39 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:39 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:26:39 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:39.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:26:40 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:40 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:26:40 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:40.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:26:40 compute-1 nova_compute[230488]: 2025-12-07 10:26:40.299 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:26:40 compute-1 ceph-mon[80077]: pgmap v1338: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Dec 07 10:26:41 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:41 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:26:41 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:41.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:26:42 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:42 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:26:42 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:42.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:26:42 compute-1 nova_compute[230488]: 2025-12-07 10:26:42.288 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:26:42 compute-1 sudo[250890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 07 10:26:42 compute-1 sudo[250890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:26:42 compute-1 sudo[250890]: pam_unix(sudo:session): session closed for user root
Dec 07 10:26:42 compute-1 ceph-mon[80077]: pgmap v1339: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 07 10:26:42 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:26:42 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:26:42 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:26:43 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:43 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:26:43 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:43.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:26:44 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:44 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:26:44 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:44.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:26:44 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:26:44 compute-1 ceph-mon[80077]: pgmap v1340: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 07 10:26:45 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:45 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:26:45 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:45.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:26:45 compute-1 ceph-mon[80077]: pgmap v1341: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 07 10:26:46 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:46 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:26:46 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:46.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:26:46 compute-1 podman[250917]: 2025-12-07 10:26:46.61676975 +0000 UTC m=+0.119885104 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 07 10:26:47 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:47 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:26:47 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:47.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:26:47 compute-1 ceph-mon[80077]: pgmap v1342: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 07 10:26:48 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:48 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:26:48 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:48.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:26:49 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:26:49 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:49 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:26:49 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:49.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:26:49 compute-1 ceph-mon[80077]: pgmap v1343: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:26:50 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:50 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:26:50 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:50.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:26:50 compute-1 sudo[250946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:26:50 compute-1 sudo[250946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:26:50 compute-1 sudo[250946]: pam_unix(sudo:session): session closed for user root
Dec 07 10:26:51 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:51 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:26:51 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:51.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:26:51 compute-1 ceph-mon[80077]: pgmap v1344: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:26:52 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:52 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:26:52 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:52.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:26:53 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:53 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:26:53 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:53.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:26:53 compute-1 podman[250972]: 2025-12-07 10:26:53.570755959 +0000 UTC m=+0.075992679 container health_status 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 07 10:26:53 compute-1 ceph-mon[80077]: pgmap v1345: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:26:54 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:54 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:26:54 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:54.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:26:54 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:26:55 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:55 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:26:55 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:55.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:26:56 compute-1 ceph-mon[80077]: pgmap v1346: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:26:56 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:56 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:26:56 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:56.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:26:57 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:57 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:26:57 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:57.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:26:58 compute-1 ceph-mon[80077]: pgmap v1347: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:26:58 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:26:58 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:58 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:26:58 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:26:58.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:26:59 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:26:59 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:26:59 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:26:59 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:26:59.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:27:00 compute-1 ceph-mon[80077]: pgmap v1348: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:27:00 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:00 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:27:00 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:00.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:27:00 compute-1 podman[250996]: 2025-12-07 10:27:00.582402267 +0000 UTC m=+0.066805800 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Dec 07 10:27:01 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:01 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:27:01 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:01.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:27:02 compute-1 ceph-mon[80077]: pgmap v1349: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:27:02 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:02 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:27:02 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:02.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:27:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/1551884830' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 07 10:27:03 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/1551884830' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 07 10:27:03 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:03 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:27:03 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:03.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:27:04 compute-1 ceph-mon[80077]: pgmap v1350: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:27:04 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:04 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:27:04 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:04.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:27:04 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:27:04 compute-1 sshd-session[251017]: Connection closed by authenticating user root 161.35.84.99 port 60292 [preauth]
Dec 07 10:27:05 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:05 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:27:05 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:05.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:27:06 compute-1 ceph-mon[80077]: pgmap v1351: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:27:06 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:06 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:27:06 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:06.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:27:07 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:07 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:27:07 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:07.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:27:08 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:08 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:27:08 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:08.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:27:08 compute-1 ceph-mon[80077]: pgmap v1352: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:27:09 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:27:09 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:09 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:27:09 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:09.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:27:10 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:10 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:27:10 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:10.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:27:10 compute-1 ceph-mon[80077]: pgmap v1353: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:27:10 compute-1 sudo[251023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:27:10 compute-1 sudo[251023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:27:10 compute-1 sudo[251023]: pam_unix(sudo:session): session closed for user root
Dec 07 10:27:11 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:11 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:27:11 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:11.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:27:12 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:12 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:27:12 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:12.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:27:12 compute-1 ceph-mon[80077]: pgmap v1354: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:27:13 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:27:13 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:13 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:27:13 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:13.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:27:14 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:14 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:27:14 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:14.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:27:14 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:27:14 compute-1 ceph-mon[80077]: pgmap v1355: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:27:15 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:15 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:27:15 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:15.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:27:16 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:16 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:27:16 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:16.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:27:16 compute-1 ceph-mon[80077]: pgmap v1356: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:27:17 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:17 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:27:17 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:17.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:27:17 compute-1 podman[251051]: 2025-12-07 10:27:17.616361681 +0000 UTC m=+0.105897043 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 07 10:27:17 compute-1 ceph-mon[80077]: pgmap v1357: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:27:18 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:18 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:27:18 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:18.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:27:19 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:27:19 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:19 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:27:19 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:19.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:27:19 compute-1 ceph-mon[80077]: pgmap v1358: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:27:20 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:20 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:27:20 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:20.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:27:21 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:21 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:27:21 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:21.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:27:22 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:22 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:27:22 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:22.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:27:22 compute-1 ceph-mon[80077]: pgmap v1359: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:27:22 compute-1 sshd-session[251079]: Invalid user postgres from 104.248.193.130 port 59226
Dec 07 10:27:22 compute-1 sshd-session[251079]: Connection closed by invalid user postgres 104.248.193.130 port 59226 [preauth]
Dec 07 10:27:23 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:23 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:27:23 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:23.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:27:24 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:24 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:27:24 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:24.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:27:24 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:27:24 compute-1 ceph-mon[80077]: pgmap v1360: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:27:24 compute-1 podman[251083]: 2025-12-07 10:27:24.581519824 +0000 UTC m=+0.072829054 container health_status 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 07 10:27:25 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:25 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:27:25 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:25.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:27:26 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:26 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:27:26 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:26.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:27:26 compute-1 ceph-mon[80077]: pgmap v1361: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:27:27 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:27 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:27:27 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:27.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:27:28 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:28 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:27:28 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:28.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:27:28 compute-1 ceph-mon[80077]: pgmap v1362: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:27:28 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:27:29 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:27:29 compute-1 nova_compute[230488]: 2025-12-07 10:27:29.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:27:29 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:29 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:27:29 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:29.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:27:30 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:30 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:27:30 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:30.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:27:30 compute-1 ceph-mon[80077]: pgmap v1363: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:27:30 compute-1 sudo[251106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:27:30 compute-1 sudo[251106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:27:30 compute-1 sudo[251106]: pam_unix(sudo:session): session closed for user root
Dec 07 10:27:30 compute-1 podman[251130]: 2025-12-07 10:27:30.758392172 +0000 UTC m=+0.078869218 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 07 10:27:31 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:31 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:27:31 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:31.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:27:32 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:32 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:27:32 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:32.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:27:32 compute-1 nova_compute[230488]: 2025-12-07 10:27:32.270 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:27:32 compute-1 ceph-mon[80077]: pgmap v1364: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:27:33 compute-1 nova_compute[230488]: 2025-12-07 10:27:33.270 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:27:33 compute-1 nova_compute[230488]: 2025-12-07 10:27:33.270 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:27:33 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:33 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:27:33 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:33.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:27:34 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:34 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:27:34 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:34.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:27:34 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:27:34 compute-1 nova_compute[230488]: 2025-12-07 10:27:34.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:27:34 compute-1 nova_compute[230488]: 2025-12-07 10:27:34.324 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:27:34 compute-1 nova_compute[230488]: 2025-12-07 10:27:34.325 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:27:34 compute-1 nova_compute[230488]: 2025-12-07 10:27:34.325 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:27:34 compute-1 nova_compute[230488]: 2025-12-07 10:27:34.325 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 07 10:27:34 compute-1 nova_compute[230488]: 2025-12-07 10:27:34.325 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 07 10:27:34 compute-1 ceph-mon[80077]: pgmap v1365: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:27:34 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 07 10:27:34 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1272498199' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:27:34 compute-1 nova_compute[230488]: 2025-12-07 10:27:34.794 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 07 10:27:34 compute-1 nova_compute[230488]: 2025-12-07 10:27:34.941 230492 WARNING nova.virt.libvirt.driver [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 07 10:27:34 compute-1 nova_compute[230488]: 2025-12-07 10:27:34.942 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5169MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 07 10:27:34 compute-1 nova_compute[230488]: 2025-12-07 10:27:34.943 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:27:34 compute-1 nova_compute[230488]: 2025-12-07 10:27:34.943 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:27:35 compute-1 nova_compute[230488]: 2025-12-07 10:27:35.029 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 07 10:27:35 compute-1 nova_compute[230488]: 2025-12-07 10:27:35.030 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 07 10:27:35 compute-1 nova_compute[230488]: 2025-12-07 10:27:35.258 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 07 10:27:35 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:35 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:27:35 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:35.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:27:35 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/1272498199' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:27:35 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 07 10:27:35 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2906475946' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:27:35 compute-1 nova_compute[230488]: 2025-12-07 10:27:35.778 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 07 10:27:35 compute-1 nova_compute[230488]: 2025-12-07 10:27:35.785 230492 DEBUG nova.compute.provider_tree [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Inventory has not changed in ProviderTree for provider: 58b51610-0751-43d9-94a3-66540bffec81 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 07 10:27:35 compute-1 nova_compute[230488]: 2025-12-07 10:27:35.802 230492 DEBUG nova.scheduler.client.report [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Inventory has not changed for provider 58b51610-0751-43d9-94a3-66540bffec81 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 07 10:27:35 compute-1 nova_compute[230488]: 2025-12-07 10:27:35.803 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 07 10:27:35 compute-1 nova_compute[230488]: 2025-12-07 10:27:35.803 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.860s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:27:36 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:36 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:27:36 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:36.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:27:36 compute-1 ceph-mon[80077]: pgmap v1366: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:27:36 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/2906475946' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:27:36 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2677577692' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:27:36 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1661855641' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:27:36 compute-1 nova_compute[230488]: 2025-12-07 10:27:36.803 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:27:36 compute-1 nova_compute[230488]: 2025-12-07 10:27:36.804 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 07 10:27:37 compute-1 nova_compute[230488]: 2025-12-07 10:27:37.270 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:27:37 compute-1 nova_compute[230488]: 2025-12-07 10:27:37.270 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 07 10:27:37 compute-1 nova_compute[230488]: 2025-12-07 10:27:37.271 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 07 10:27:37 compute-1 nova_compute[230488]: 2025-12-07 10:27:37.299 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 07 10:27:37 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:37 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:27:37 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:37.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:27:37 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/4023986630' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:27:37 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1453650458' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:27:38 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:38 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:27:38 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:38.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:27:38 compute-1 nova_compute[230488]: 2025-12-07 10:27:38.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:27:38 compute-1 ceph-mon[80077]: pgmap v1367: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:27:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:27:38.664 142603 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:27:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:27:38.665 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:27:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:27:38.665 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:27:39 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:27:39 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:39 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:27:39 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:39.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:27:40 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:40 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:27:40 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:40.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:27:40 compute-1 ceph-mon[80077]: pgmap v1368: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:27:41 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:41 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:27:41 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:41.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:27:42 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:42 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:27:42 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:42.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:27:42 compute-1 nova_compute[230488]: 2025-12-07 10:27:42.265 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:27:42 compute-1 nova_compute[230488]: 2025-12-07 10:27:42.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:27:42 compute-1 sudo[251200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 10:27:42 compute-1 sudo[251200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:27:42 compute-1 ceph-mon[80077]: pgmap v1369: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:27:42 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:27:42 compute-1 sudo[251200]: pam_unix(sudo:session): session closed for user root
Dec 07 10:27:42 compute-1 sudo[251225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 07 10:27:42 compute-1 sudo[251225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:27:43 compute-1 nova_compute[230488]: 2025-12-07 10:27:43.284 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:27:43 compute-1 nova_compute[230488]: 2025-12-07 10:27:43.284 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 07 10:27:43 compute-1 nova_compute[230488]: 2025-12-07 10:27:43.309 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 07 10:27:43 compute-1 sudo[251225]: pam_unix(sudo:session): session closed for user root
Dec 07 10:27:43 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:43 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:27:43 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:43.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:27:44 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:27:44 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:44 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:27:44 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:44.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:27:44 compute-1 ceph-mon[80077]: pgmap v1370: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:27:45 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:45 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:27:45 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:45.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:27:46 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:46 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:27:46 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:46.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:27:46 compute-1 ceph-mon[80077]: pgmap v1371: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:27:47 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:47 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:27:47 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:47.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:27:47 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:27:47 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:27:47 compute-1 ceph-mon[80077]: pgmap v1372: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:27:47 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:27:47 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 07 10:27:47 compute-1 ceph-mon[80077]: pgmap v1373: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Dec 07 10:27:47 compute-1 ceph-mon[80077]: pgmap v1374: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 696 B/s rd, 0 op/s
Dec 07 10:27:47 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:27:47 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:27:47 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 07 10:27:47 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 07 10:27:47 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:27:48 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:48 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:27:48 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:48.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:27:48 compute-1 podman[251285]: 2025-12-07 10:27:48.78842837 +0000 UTC m=+0.191231025 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 07 10:27:49 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:27:49 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:49 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:27:49 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:49.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:27:50 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:50 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:27:50 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:50.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:27:50 compute-1 ceph-mon[80077]: pgmap v1375: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s
Dec 07 10:27:50 compute-1 sudo[251315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:27:50 compute-1 sudo[251315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:27:50 compute-1 sudo[251315]: pam_unix(sudo:session): session closed for user root
Dec 07 10:27:51 compute-1 sshd-session[251313]: Connection closed by authenticating user root 161.35.84.99 port 33152 [preauth]
Dec 07 10:27:51 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:51 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:27:51 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:51.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:27:52 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:52 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:27:52 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:52.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:27:53 compute-1 ceph-mon[80077]: pgmap v1376: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s
Dec 07 10:27:53 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:53 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:27:53 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:53.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:27:54 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:27:54 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:54 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:27:54 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:54.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:27:54 compute-1 ceph-mon[80077]: pgmap v1377: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s
Dec 07 10:27:55 compute-1 sudo[251342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 07 10:27:55 compute-1 sudo[251342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:27:55 compute-1 sudo[251342]: pam_unix(sudo:session): session closed for user root
Dec 07 10:27:55 compute-1 podman[251366]: 2025-12-07 10:27:55.131385527 +0000 UTC m=+0.080498712 container health_status 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec 07 10:27:55 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:55 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:27:55 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:55.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:27:55 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:27:55 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:27:55 compute-1 ceph-mon[80077]: pgmap v1378: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s
Dec 07 10:27:56 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:56 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:27:56 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:56.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:27:56 compute-1 nova_compute[230488]: 2025-12-07 10:27:56.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:27:56 compute-1 nova_compute[230488]: 2025-12-07 10:27:56.270 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 07 10:27:57 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:57 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:27:57 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:57.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:27:57 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:27:58 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:58 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:27:58 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:27:58.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:27:58 compute-1 ceph-mon[80077]: pgmap v1379: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 07 10:27:59 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:27:59 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:27:59 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:27:59 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:27:59.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:28:00 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:00 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:28:00 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:00.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:28:00 compute-1 ceph-mon[80077]: pgmap v1380: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:28:01 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:01 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:28:01 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:28:01.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:28:01 compute-1 podman[251391]: 2025-12-07 10:28:01.552574193 +0000 UTC m=+0.057353223 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 07 10:28:01 compute-1 ceph-mon[80077]: pgmap v1381: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:28:02 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:02 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:28:02 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:02.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:28:02 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/658643401' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 07 10:28:02 compute-1 ceph-mon[80077]: from='client.? 192.168.122.10:0/658643401' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 07 10:28:03 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:03 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:28:03 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:28:03.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:28:03 compute-1 sshd-session[251410]: Received disconnect from 101.36.224.146 port 35450:11:  [preauth]
Dec 07 10:28:03 compute-1 sshd-session[251410]: Disconnected from authenticating user root 101.36.224.146 port 35450 [preauth]
Dec 07 10:28:03 compute-1 ceph-mon[80077]: pgmap v1382: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:28:04 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:28:04 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:04 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:28:04 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:04.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:28:05 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:05 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:28:05 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:28:05.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:28:06 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:06 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:28:06 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:06.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:28:06 compute-1 ceph-mon[80077]: pgmap v1383: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:28:07 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:07 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:28:07 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:28:07.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:28:07 compute-1 ceph-mon[80077]: pgmap v1384: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:28:08 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:08 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:28:08 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:08.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:28:09 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:28:09 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:09 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:28:09 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:28:09.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:28:10 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:10 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:28:10 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:10.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:28:10 compute-1 sshd-session[251416]: Invalid user postgres from 104.248.193.130 port 49032
Dec 07 10:28:10 compute-1 sshd-session[251416]: Connection closed by invalid user postgres 104.248.193.130 port 49032 [preauth]
Dec 07 10:28:10 compute-1 ceph-mon[80077]: pgmap v1385: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:28:10 compute-1 sudo[251419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:28:10 compute-1 sudo[251419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:28:10 compute-1 sudo[251419]: pam_unix(sudo:session): session closed for user root
Dec 07 10:28:11 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:11 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:28:11 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:28:11.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:28:11 compute-1 ceph-mon[80077]: pgmap v1386: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:28:12 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:12 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:28:12 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:12.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:28:12 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:28:13 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:13 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:28:13 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:28:13.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:28:13 compute-1 ceph-mon[80077]: pgmap v1387: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:28:14 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:28:14 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:14 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:28:14 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:14.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:28:15 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:15 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:28:15 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:28:15.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:28:16 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:16 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:28:16 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:16.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:28:16 compute-1 ceph-mon[80077]: pgmap v1388: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:28:17 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:17 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:28:17 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:28:17.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:28:17 compute-1 ceph-mon[80077]: pgmap v1389: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:28:18 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:18 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:28:18 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:18.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:28:18 compute-1 ceph-mon[80077]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #76. Immutable memtables: 0.
Dec 07 10:28:18 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:28:18.430045) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 07 10:28:18 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 76
Dec 07 10:28:18 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765103298430123, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 1438, "num_deletes": 257, "total_data_size": 3548670, "memory_usage": 3587840, "flush_reason": "Manual Compaction"}
Dec 07 10:28:18 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #77: started
Dec 07 10:28:18 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765103298446055, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 77, "file_size": 2295859, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39820, "largest_seqno": 41253, "table_properties": {"data_size": 2289816, "index_size": 3306, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 12910, "raw_average_key_size": 19, "raw_value_size": 2277552, "raw_average_value_size": 3466, "num_data_blocks": 145, "num_entries": 657, "num_filter_entries": 657, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765103180, "oldest_key_time": 1765103180, "file_creation_time": 1765103298, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "19b7cd17-892b-4642-8771-311739802c4a", "db_session_id": "JE48258Z8V09XG5T5TD7", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Dec 07 10:28:18 compute-1 ceph-mon[80077]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 16077 microseconds, and 6259 cpu microseconds.
Dec 07 10:28:18 compute-1 ceph-mon[80077]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 07 10:28:18 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:28:18.446127) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #77: 2295859 bytes OK
Dec 07 10:28:18 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:28:18.446160) [db/memtable_list.cc:519] [default] Level-0 commit table #77 started
Dec 07 10:28:18 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:28:18.448021) [db/memtable_list.cc:722] [default] Level-0 commit table #77: memtable #1 done
Dec 07 10:28:18 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:28:18.448038) EVENT_LOG_v1 {"time_micros": 1765103298448033, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 07 10:28:18 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:28:18.448058) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 07 10:28:18 compute-1 ceph-mon[80077]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 3541995, prev total WAL file size 3541995, number of live WAL files 2.
Dec 07 10:28:18 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000073.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 10:28:18 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:28:18.449290) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303032' seq:72057594037927935, type:22 .. '6C6F676D0031323535' seq:0, type:0; will stop at (end)
Dec 07 10:28:18 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 07 10:28:18 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [77(2242KB)], [75(12MB)]
Dec 07 10:28:18 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765103298449359, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [77], "files_L6": [75], "score": -1, "input_data_size": 15424968, "oldest_snapshot_seqno": -1}
Dec 07 10:28:18 compute-1 ceph-mon[80077]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #78: 6967 keys, 15289190 bytes, temperature: kUnknown
Dec 07 10:28:18 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765103298573918, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 78, "file_size": 15289190, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15243374, "index_size": 27272, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17477, "raw_key_size": 183215, "raw_average_key_size": 26, "raw_value_size": 15118538, "raw_average_value_size": 2170, "num_data_blocks": 1073, "num_entries": 6967, "num_filter_entries": 6967, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765100473, "oldest_key_time": 0, "file_creation_time": 1765103298, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "19b7cd17-892b-4642-8771-311739802c4a", "db_session_id": "JE48258Z8V09XG5T5TD7", "orig_file_number": 78, "seqno_to_time_mapping": "N/A"}}
Dec 07 10:28:18 compute-1 ceph-mon[80077]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 07 10:28:18 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:28:18.574156) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 15289190 bytes
Dec 07 10:28:18 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:28:18.575823) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 123.8 rd, 122.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 12.5 +0.0 blob) out(14.6 +0.0 blob), read-write-amplify(13.4) write-amplify(6.7) OK, records in: 7495, records dropped: 528 output_compression: NoCompression
Dec 07 10:28:18 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:28:18.575879) EVENT_LOG_v1 {"time_micros": 1765103298575854, "job": 46, "event": "compaction_finished", "compaction_time_micros": 124638, "compaction_time_cpu_micros": 61642, "output_level": 6, "num_output_files": 1, "total_output_size": 15289190, "num_input_records": 7495, "num_output_records": 6967, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 07 10:28:18 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 10:28:18 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765103298577317, "job": 46, "event": "table_file_deletion", "file_number": 77}
Dec 07 10:28:18 compute-1 ceph-mon[80077]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000075.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 07 10:28:18 compute-1 ceph-mon[80077]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765103298582322, "job": 46, "event": "table_file_deletion", "file_number": 75}
Dec 07 10:28:18 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:28:18.449130) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:28:18 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:28:18.582492) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:28:18 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:28:18.582498) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:28:18 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:28:18.582499) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:28:18 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:28:18.582500) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:28:18 compute-1 ceph-mon[80077]: rocksdb: (Original Log Time 2025/12/07-10:28:18.582502) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 07 10:28:19 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:28:19 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:19 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:28:19 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:28:19.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:28:19 compute-1 podman[251448]: 2025-12-07 10:28:19.625031315 +0000 UTC m=+0.126012791 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 07 10:28:20 compute-1 ceph-mon[80077]: pgmap v1390: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:28:20 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:20 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:28:20 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:20.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:28:21 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:21 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:28:21 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:28:21.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:28:22 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:22 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:28:22 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:22.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:28:22 compute-1 ceph-mon[80077]: pgmap v1391: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:28:23 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:23 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:28:23 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:28:23.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:28:23 compute-1 ceph-mon[80077]: pgmap v1392: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:28:24 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:28:24 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:24 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:28:24 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:24.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:28:25 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:25 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:28:25 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:28:25.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:28:25 compute-1 podman[251477]: 2025-12-07 10:28:25.567572455 +0000 UTC m=+0.073085920 container health_status 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 07 10:28:25 compute-1 ceph-mon[80077]: pgmap v1393: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:28:26 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:26 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:28:26 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:26.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:28:27 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:28:27 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:27 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:28:27 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:28:27.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:28:28 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:28 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:28:28 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:28.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:28:28 compute-1 ceph-mon[80077]: pgmap v1394: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:28:29 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:28:29 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:29 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:28:29 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:28:29.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:28:29 compute-1 ceph-mon[80077]: pgmap v1395: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:28:30 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:30 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:28:30 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:30.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:28:30 compute-1 sudo[251500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:28:30 compute-1 sudo[251500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:28:30 compute-1 sudo[251500]: pam_unix(sudo:session): session closed for user root
Dec 07 10:28:31 compute-1 nova_compute[230488]: 2025-12-07 10:28:31.290 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:28:31 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:31 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:28:31 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:28:31.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:28:31 compute-1 ceph-mon[80077]: pgmap v1396: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:28:32 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:32 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:28:32 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:32.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:28:32 compute-1 podman[251526]: 2025-12-07 10:28:32.585928877 +0000 UTC m=+0.077734147 container health_status 7b9187d913a8856f87cd0b020ea93ab31ac6b0770d0c6df95f23f58a26c0eed3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 07 10:28:33 compute-1 nova_compute[230488]: 2025-12-07 10:28:33.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:28:33 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:33 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:28:33 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:28:33.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:28:33 compute-1 sshd-session[251545]: Accepted publickey for zuul from 192.168.122.10 port 53474 ssh2: ECDSA SHA256:Ge4vEI3tJyOAEV3kv3WUGTzBV/uoL+dgWZHkPhbKL64
Dec 07 10:28:33 compute-1 systemd-logind[796]: New session 58 of user zuul.
Dec 07 10:28:33 compute-1 systemd[1]: Started Session 58 of User zuul.
Dec 07 10:28:33 compute-1 ceph-mon[80077]: pgmap v1397: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:28:33 compute-1 sshd-session[251545]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 07 10:28:33 compute-1 sudo[251549]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Dec 07 10:28:33 compute-1 sudo[251549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 07 10:28:34 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:28:34 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:34 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:28:34 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:34.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:28:34 compute-1 nova_compute[230488]: 2025-12-07 10:28:34.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:28:34 compute-1 nova_compute[230488]: 2025-12-07 10:28:34.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:28:34 compute-1 nova_compute[230488]: 2025-12-07 10:28:34.270 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:28:34 compute-1 nova_compute[230488]: 2025-12-07 10:28:34.311 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:28:34 compute-1 nova_compute[230488]: 2025-12-07 10:28:34.311 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:28:34 compute-1 nova_compute[230488]: 2025-12-07 10:28:34.312 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:28:34 compute-1 nova_compute[230488]: 2025-12-07 10:28:34.312 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 07 10:28:34 compute-1 nova_compute[230488]: 2025-12-07 10:28:34.313 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 07 10:28:34 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 07 10:28:34 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4146308812' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:28:34 compute-1 nova_compute[230488]: 2025-12-07 10:28:34.739 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 07 10:28:34 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/4146308812' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:28:34 compute-1 nova_compute[230488]: 2025-12-07 10:28:34.895 230492 WARNING nova.virt.libvirt.driver [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 07 10:28:34 compute-1 nova_compute[230488]: 2025-12-07 10:28:34.896 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5157MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 07 10:28:34 compute-1 nova_compute[230488]: 2025-12-07 10:28:34.896 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:28:34 compute-1 nova_compute[230488]: 2025-12-07 10:28:34.897 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:28:35 compute-1 nova_compute[230488]: 2025-12-07 10:28:35.194 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 07 10:28:35 compute-1 nova_compute[230488]: 2025-12-07 10:28:35.194 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 07 10:28:35 compute-1 nova_compute[230488]: 2025-12-07 10:28:35.291 230492 DEBUG nova.scheduler.client.report [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Refreshing inventories for resource provider 58b51610-0751-43d9-94a3-66540bffec81 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 07 10:28:35 compute-1 nova_compute[230488]: 2025-12-07 10:28:35.309 230492 DEBUG nova.scheduler.client.report [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Updating ProviderTree inventory for provider 58b51610-0751-43d9-94a3-66540bffec81 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 07 10:28:35 compute-1 nova_compute[230488]: 2025-12-07 10:28:35.310 230492 DEBUG nova.compute.provider_tree [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Updating inventory in ProviderTree for provider 58b51610-0751-43d9-94a3-66540bffec81 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 07 10:28:35 compute-1 nova_compute[230488]: 2025-12-07 10:28:35.328 230492 DEBUG nova.scheduler.client.report [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Refreshing aggregate associations for resource provider 58b51610-0751-43d9-94a3-66540bffec81, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 07 10:28:35 compute-1 nova_compute[230488]: 2025-12-07 10:28:35.366 230492 DEBUG nova.scheduler.client.report [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Refreshing trait associations for resource provider 58b51610-0751-43d9-94a3-66540bffec81, traits: HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE,HW_CPU_X86_AESNI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AVX,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_AMD_SVM,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_BMI2,COMPUTE_RESCUE_BFV,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_F16C,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SVM,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_ABM,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_EXTEND _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 07 10:28:35 compute-1 nova_compute[230488]: 2025-12-07 10:28:35.383 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 07 10:28:35 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:35 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:28:35 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:28:35.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:28:35 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 07 10:28:35 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2148812951' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:28:35 compute-1 ceph-mon[80077]: pgmap v1398: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:28:35 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/2148812951' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:28:35 compute-1 nova_compute[230488]: 2025-12-07 10:28:35.791 230492 DEBUG oslo_concurrency.processutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 07 10:28:35 compute-1 nova_compute[230488]: 2025-12-07 10:28:35.796 230492 DEBUG nova.compute.provider_tree [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Inventory has not changed in ProviderTree for provider: 58b51610-0751-43d9-94a3-66540bffec81 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 07 10:28:35 compute-1 nova_compute[230488]: 2025-12-07 10:28:35.829 230492 DEBUG nova.scheduler.client.report [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Inventory has not changed for provider 58b51610-0751-43d9-94a3-66540bffec81 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 07 10:28:35 compute-1 nova_compute[230488]: 2025-12-07 10:28:35.830 230492 DEBUG nova.compute.resource_tracker [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 07 10:28:35 compute-1 nova_compute[230488]: 2025-12-07 10:28:35.830 230492 DEBUG oslo_concurrency.lockutils [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.933s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:28:36 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:36 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:28:36 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:36.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:28:36 compute-1 nova_compute[230488]: 2025-12-07 10:28:36.830 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:28:36 compute-1 nova_compute[230488]: 2025-12-07 10:28:36.830 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 07 10:28:36 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2074318380' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:28:36 compute-1 ceph-mon[80077]: from='client.27272 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:28:36 compute-1 ceph-mon[80077]: from='client.18414 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:28:37 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status"} v 0)
Dec 07 10:28:37 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1506127249' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 07 10:28:37 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:37 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:28:37 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:28:37.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:28:37 compute-1 ceph-mon[80077]: from='client.27511 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:28:37 compute-1 ceph-mon[80077]: from='client.27281 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:28:37 compute-1 ceph-mon[80077]: from='client.18423 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:28:37 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1298980407' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:28:37 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/1751109523' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:28:37 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/1506127249' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 07 10:28:37 compute-1 ceph-mon[80077]: from='client.27523 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:28:37 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/2125063758' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 07 10:28:37 compute-1 ceph-mon[80077]: pgmap v1399: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:28:37 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/3063365727' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 07 10:28:38 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:38 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:28:38 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:38.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:28:38 compute-1 sshd-session[251877]: Connection closed by authenticating user root 161.35.84.99 port 36218 [preauth]
Dec 07 10:28:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:28:38.665 142603 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 07 10:28:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:28:38.665 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 07 10:28:38 compute-1 ovn_metadata_agent[142598]: 2025-12-07 10:28:38.666 142603 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 07 10:28:38 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1033101545' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 07 10:28:39 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:28:39 compute-1 nova_compute[230488]: 2025-12-07 10:28:39.269 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:28:39 compute-1 nova_compute[230488]: 2025-12-07 10:28:39.270 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 07 10:28:39 compute-1 nova_compute[230488]: 2025-12-07 10:28:39.270 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 07 10:28:39 compute-1 nova_compute[230488]: 2025-12-07 10:28:39.289 230492 DEBUG nova.compute.manager [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 07 10:28:39 compute-1 nova_compute[230488]: 2025-12-07 10:28:39.290 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:28:39 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:39 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:28:39 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:28:39.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:28:39 compute-1 ceph-mon[80077]: pgmap v1400: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:28:40 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:40 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:28:40 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:40.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:28:40 compute-1 ovs-vsctl[251928]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec 07 10:28:41 compute-1 virtqemud[229835]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Dec 07 10:28:41 compute-1 virtqemud[229835]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Dec 07 10:28:41 compute-1 virtqemud[229835]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec 07 10:28:41 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:41 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:28:41 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:28:41.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:28:41 compute-1 ceph-mon[80077]: pgmap v1401: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:28:42 compute-1 ceph-mds[85822]: mds.cephfs.compute-1.ihigcc asok_command: cache status {prefix=cache status} (starting...)
Dec 07 10:28:42 compute-1 ceph-mds[85822]: mds.cephfs.compute-1.ihigcc Can't run that command on an inactive MDS!
Dec 07 10:28:42 compute-1 ceph-mds[85822]: mds.cephfs.compute-1.ihigcc asok_command: client ls {prefix=client ls} (starting...)
Dec 07 10:28:42 compute-1 ceph-mds[85822]: mds.cephfs.compute-1.ihigcc Can't run that command on an inactive MDS!
Dec 07 10:28:42 compute-1 lvm[252272]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 07 10:28:42 compute-1 lvm[252272]: VG ceph_vg0 finished
Dec 07 10:28:42 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:42 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:28:42 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:42.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:28:42 compute-1 ceph-mds[85822]: mds.cephfs.compute-1.ihigcc asok_command: damage ls {prefix=damage ls} (starting...)
Dec 07 10:28:42 compute-1 ceph-mds[85822]: mds.cephfs.compute-1.ihigcc Can't run that command on an inactive MDS!
Dec 07 10:28:42 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:28:42 compute-1 ceph-mon[80077]: from='client.18459 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:28:42 compute-1 ceph-mon[80077]: from='client.27314 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:28:42 compute-1 ceph-mds[85822]: mds.cephfs.compute-1.ihigcc asok_command: dump loads {prefix=dump loads} (starting...)
Dec 07 10:28:42 compute-1 ceph-mds[85822]: mds.cephfs.compute-1.ihigcc Can't run that command on an inactive MDS!
Dec 07 10:28:42 compute-1 ceph-mds[85822]: mds.cephfs.compute-1.ihigcc asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Dec 07 10:28:42 compute-1 ceph-mds[85822]: mds.cephfs.compute-1.ihigcc Can't run that command on an inactive MDS!
Dec 07 10:28:43 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "report"} v 0)
Dec 07 10:28:43 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2336472501' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 07 10:28:43 compute-1 ceph-mds[85822]: mds.cephfs.compute-1.ihigcc asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Dec 07 10:28:43 compute-1 ceph-mds[85822]: mds.cephfs.compute-1.ihigcc Can't run that command on an inactive MDS!
Dec 07 10:28:43 compute-1 ceph-mds[85822]: mds.cephfs.compute-1.ihigcc asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Dec 07 10:28:43 compute-1 ceph-mds[85822]: mds.cephfs.compute-1.ihigcc Can't run that command on an inactive MDS!
Dec 07 10:28:43 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 07 10:28:43 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2665132872' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:28:43 compute-1 ceph-mds[85822]: mds.cephfs.compute-1.ihigcc asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Dec 07 10:28:43 compute-1 ceph-mds[85822]: mds.cephfs.compute-1.ihigcc Can't run that command on an inactive MDS!
Dec 07 10:28:43 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:43 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:28:43 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:28:43.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:28:43 compute-1 ceph-mds[85822]: mds.cephfs.compute-1.ihigcc asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Dec 07 10:28:43 compute-1 ceph-mds[85822]: mds.cephfs.compute-1.ihigcc Can't run that command on an inactive MDS!
Dec 07 10:28:43 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config log"} v 0)
Dec 07 10:28:43 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3163529591' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec 07 10:28:43 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/928733236' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 07 10:28:43 compute-1 ceph-mon[80077]: from='client.18477 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:28:43 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/2336472501' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 07 10:28:43 compute-1 ceph-mon[80077]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 07 10:28:43 compute-1 ceph-mon[80077]: from='client.27335 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:28:43 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/2189176137' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:28:43 compute-1 ceph-mon[80077]: from='client.18489 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:28:43 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/2665132872' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:28:43 compute-1 ceph-mon[80077]: from='client.27353 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:28:43 compute-1 ceph-mon[80077]: from='client.27535 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:28:43 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/83704881' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec 07 10:28:43 compute-1 ceph-mon[80077]: pgmap v1402: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:28:43 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/3163529591' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec 07 10:28:43 compute-1 ceph-mds[85822]: mds.cephfs.compute-1.ihigcc asok_command: get subtrees {prefix=get subtrees} (starting...)
Dec 07 10:28:43 compute-1 ceph-mds[85822]: mds.cephfs.compute-1.ihigcc Can't run that command on an inactive MDS!
Dec 07 10:28:44 compute-1 ceph-mds[85822]: mds.cephfs.compute-1.ihigcc asok_command: ops {prefix=ops} (starting...)
Dec 07 10:28:44 compute-1 ceph-mds[85822]: mds.cephfs.compute-1.ihigcc Can't run that command on an inactive MDS!
Dec 07 10:28:44 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:28:44 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:44 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:28:44 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:44.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:28:44 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Dec 07 10:28:44 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1096184592' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec 07 10:28:44 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Dec 07 10:28:44 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3144110570' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec 07 10:28:44 compute-1 nova_compute[230488]: 2025-12-07 10:28:44.284 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:28:44 compute-1 nova_compute[230488]: 2025-12-07 10:28:44.285 230492 DEBUG oslo_service.periodic_task [None req-d0d4fc8a-2e2e-434b-9711-b9fcadfb180d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 07 10:28:44 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Dec 07 10:28:44 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3639213672' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 07 10:28:44 compute-1 ceph-mds[85822]: mds.cephfs.compute-1.ihigcc asok_command: session ls {prefix=session ls} (starting...)
Dec 07 10:28:44 compute-1 ceph-mds[85822]: mds.cephfs.compute-1.ihigcc Can't run that command on an inactive MDS!
Dec 07 10:28:44 compute-1 ceph-mon[80077]: from='client.18513 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:28:44 compute-1 ceph-mon[80077]: from='client.27380 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:28:44 compute-1 ceph-mon[80077]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 07 10:28:44 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2663519296' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 07 10:28:44 compute-1 ceph-mon[80077]: from='client.27553 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:28:44 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/3282347237' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec 07 10:28:44 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/298639853' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec 07 10:28:44 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2891083063' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:28:44 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/1096184592' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec 07 10:28:44 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/3144110570' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec 07 10:28:44 compute-1 ceph-mon[80077]: from='client.27568 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:28:44 compute-1 ceph-mon[80077]: from='client.27410 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:28:44 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/3563804765' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec 07 10:28:44 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/609356737' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 07 10:28:44 compute-1 ceph-mon[80077]: from='client.27428 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:28:44 compute-1 ceph-mon[80077]: from='client.27583 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:28:44 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/3639213672' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 07 10:28:44 compute-1 ceph-mds[85822]: mds.cephfs.compute-1.ihigcc asok_command: status {prefix=status} (starting...)
Dec 07 10:28:45 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Dec 07 10:28:45 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/454445805' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 07 10:28:45 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "features"} v 0)
Dec 07 10:28:45 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2037713878' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 07 10:28:45 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Dec 07 10:28:45 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1487144800' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 07 10:28:45 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:45 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:28:45 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:28:45.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:28:45 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Dec 07 10:28:45 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3209999873' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 07 10:28:45 compute-1 ceph-mon[80077]: from='client.18570 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:28:45 compute-1 ceph-mon[80077]: from='client.27437 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:28:45 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/497187814' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 07 10:28:45 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/3325833804' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec 07 10:28:45 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/454445805' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 07 10:28:45 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/4230684718' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec 07 10:28:45 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/3543051868' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 07 10:28:45 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/2037713878' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 07 10:28:45 compute-1 ceph-mon[80077]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 07 10:28:45 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/2810979409' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 07 10:28:45 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/1487144800' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 07 10:28:45 compute-1 ceph-mon[80077]: from='client.27607 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:28:45 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/3209999873' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 07 10:28:45 compute-1 ceph-mon[80077]: pgmap v1403: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:28:45 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/218383073' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec 07 10:28:45 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 07 10:28:45 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3036307131' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 07 10:28:46 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:46 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:28:46 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:46.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:28:46 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Dec 07 10:28:46 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2055295776' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 07 10:28:46 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/1966025728' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec 07 10:28:46 compute-1 ceph-mon[80077]: from='client.27625 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:28:46 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/3345442847' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 07 10:28:46 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/3036307131' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 07 10:28:46 compute-1 ceph-mon[80077]: from='client.18636 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:28:46 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2409192841' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 07 10:28:46 compute-1 ceph-mon[80077]: from='client.27491 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:28:46 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/2709281894' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 07 10:28:46 compute-1 ceph-mon[80077]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 07 10:28:46 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1269760533' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec 07 10:28:46 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/3926413628' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 07 10:28:46 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/43668824' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 07 10:28:46 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/2055295776' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 07 10:28:46 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/206540178' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 07 10:28:46 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Dec 07 10:28:46 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2261683652' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec 07 10:28:47 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Dec 07 10:28:47 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1993197955' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec 07 10:28:47 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec 07 10:28:47 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1592335159' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 07 10:28:47 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Dec 07 10:28:47 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4027330642' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec 07 10:28:47 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:47 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:28:47 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:28:47.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:28:47 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/833726415' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec 07 10:28:47 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2261683652' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec 07 10:28:47 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/476463414' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 07 10:28:47 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/1993197955' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec 07 10:28:47 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/1592335159' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 07 10:28:47 compute-1 ceph-mon[80077]: from='client.18690 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:28:47 compute-1 ceph-mon[80077]: from='client.27670 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:28:47 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1390567391' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 07 10:28:47 compute-1 ceph-mon[80077]: from='client.27527 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:28:47 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/4236388532' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 07 10:28:47 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/4027330642' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec 07 10:28:47 compute-1 ceph-mon[80077]: from='client.18717 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:28:47 compute-1 ceph-mon[80077]: pgmap v1404: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:28:47 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/3535025420' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec 07 10:28:47 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/396672336' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 07 10:28:47 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Dec 07 10:28:47 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3801422216' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 07 10:28:48 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:48 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:28:48 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:48.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986800 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:13.455168+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:14.455322+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:15.455453+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:16.455644+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:17.455822+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986800 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:18.455958+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:19.456129+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:20.456280+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:21.456418+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:22.456646+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986800 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:23.456815+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:24.456970+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:25.457142+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:26.457285+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:27.457441+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986800 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:28.457601+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:29.457901+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:30.458109+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:31.458253+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:32.458444+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986800 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:33.458586+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:34.458772+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:35.458895+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:36.459030+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:37.459183+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986800 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:38.459337+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:39.459506+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:40.459636+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:41.459789+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fb15d800 session 0x5613fc3e65a0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613f9e95c00 session 0x5613f9068f00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:42.459922+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986800 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:43.460130+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:44.460332+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:45.460517+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:46.460668+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:47.460805+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986800 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:48.460935+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:49.461122+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:50.461282+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:51.461461+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:52.461594+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986800 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe91000
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 53.599273682s of 53.603805542s, submitted: 1
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:53.461769+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:54.461898+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:55.462011+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 3072000 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:56.462146+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 3063808 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:57.462335+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 3063808 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988444 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:58.462454+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 3063808 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:56:59.462779+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 3063808 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:00.462974+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 3063808 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:01.463115+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 3063808 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:02.463293+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 3063808 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987853 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:03.463457+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 3063808 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:04.463610+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 3063808 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:05.463837+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 3063808 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:06.463983+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.491450310s of 13.503188133s, submitted: 3
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 3063808 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:07.464119+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 3063808 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987721 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:08.464356+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 3063808 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:09.464566+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 3063808 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:10.464749+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 3063808 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:11.464893+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 3063808 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:12.465060+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 3063808 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987721 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:13.465209+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 3063808 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:14.465392+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 3063808 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:15.465564+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 3063808 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:16.465684+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 3055616 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:17.465832+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 3055616 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987721 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:18.465995+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 3055616 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:19.466163+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 3055616 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:20.466341+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 3055616 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:21.466496+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 3055616 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:22.466727+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 3055616 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987721 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:23.466865+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 3055616 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:24.467039+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 3055616 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:25.467159+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 3055616 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:26.467311+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 3055616 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:27.467468+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 3055616 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987721 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:28.467605+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 3055616 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:29.467843+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 3055616 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:30.468335+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 3055616 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:31.468499+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 3055616 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:32.468650+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 3055616 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987721 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:33.468792+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fbe91000 session 0x5613faef7860
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 3055616 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:34.468935+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 3055616 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:35.469093+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 3055616 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:36.469247+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 3055616 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:37.469373+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84992000 unmapped: 3047424 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987721 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:38.469504+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84992000 unmapped: 3047424 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:39.469761+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84992000 unmapped: 3047424 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:40.469954+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84992000 unmapped: 3047424 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:41.470184+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84992000 unmapped: 3047424 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:42.470329+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84992000 unmapped: 3047424 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987721 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:43.470519+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84992000 unmapped: 3047424 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:44.470678+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe91800
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 38.201156616s of 38.223014832s, submitted: 1
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84992000 unmapped: 3047424 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:45.470823+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84992000 unmapped: 3047424 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:46.471008+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 84992000 unmapped: 3047424 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:47.471179+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe91c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85000192 unmapped: 3039232 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989365 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:48.471313+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85000192 unmapped: 3039232 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:49.471524+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85000192 unmapped: 3039232 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:50.471673+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85000192 unmapped: 3039232 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:51.471811+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85000192 unmapped: 3039232 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:52.471936+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85000192 unmapped: 3039232 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989365 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:53.472076+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85000192 unmapped: 3039232 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:54.472197+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85000192 unmapped: 3039232 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:55.472329+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85000192 unmapped: 3039232 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:56.472464+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 3031040 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:57.472652+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.066782951s of 13.264513969s, submitted: 3
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 3031040 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988642 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:58.472789+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 3031040 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:57:59.472951+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 3031040 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:00.473074+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 3031040 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:01.473207+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 3031040 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:02.473349+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 3031040 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:03.473495+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988642 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 3031040 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:04.473637+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 3031040 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:05.473765+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 3031040 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:06.474228+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 3031040 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:07.474514+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 3031040 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:08.474672+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988642 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 3031040 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:09.474925+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 3031040 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:10.475056+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 3031040 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:11.475228+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 3031040 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:12.475358+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 3031040 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:13.475493+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988642 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fbe90000 session 0x5613fb411a40
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613f88ad400 session 0x5613fc48a780
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 3031040 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:14.475700+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 3031040 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:15.475845+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fbe91800 session 0x5613fb4ef0e0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 3031040 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:16.476050+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 3031040 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:17.476178+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 3031040 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:18.476301+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988642 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 3022848 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:19.476475+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 3022848 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:20.476639+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 3022848 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:21.476817+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 3022848 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:22.476993+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 3022848 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:23.477135+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988642 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 3022848 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:24.477275+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.754665375s of 26.779539108s, submitted: 1
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 3022848 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:25.477425+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 3022848 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:26.477578+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb15d800
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 3022848 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:27.477691+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 3022848 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:28.477807+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990418 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 3022848 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:29.477968+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 3022848 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:30.478095+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 3014656 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:31.478248+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 3014656 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:32.478443+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 3014656 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:33.478585+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990418 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:34.478759+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 3014656 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:35.478896+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 3014656 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.936524391s of 11.331603050s, submitted: 3
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:36.479036+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 3014656 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:37.479242+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 3014656 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:38.479423+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 3014656 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989827 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:39.479656+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 3014656 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:40.479827+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 3014656 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:41.479989+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 3014656 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:42.480124+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 3014656 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:43.480312+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 3014656 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989695 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:44.480563+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 3014656 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:45.480782+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 3014656 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:46.480929+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 3014656 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:47.481073+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 3014656 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:48.481210+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 3014656 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989563 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:49.481394+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 3014656 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:50.481595+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 3014656 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:51.481803+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 3014656 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fbe91c00 session 0x5613faef6780
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fb15dc00 session 0x5613fc48ad20
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:52.481948+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 3014656 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:53.482085+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 3006464 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989563 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:54.482262+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 3006464 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:55.482393+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 3006464 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fb15d800 session 0x5613f9cb6960
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:56.482529+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 3006464 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:57.482702+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 3006464 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:58.482873+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 3006464 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989563 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:58:59.483038+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 3006464 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:00.483191+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 3006464 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:01.483388+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 3006464 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:02.483534+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 3006464 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f88ad400
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.618913651s of 26.631462097s, submitted: 3
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:03.483653+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 3006464 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989695 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:04.483803+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 3006464 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:05.483970+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 3006464 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb15d800
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:06.484122+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86081536 unmapped: 1957888 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb15dc00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:07.484250+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 3006464 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:08.484384+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 3006464 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991339 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe91800
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:09.484538+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 3006464 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:10.484756+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 3006464 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:11.484918+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 3006464 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:12.485065+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 3006464 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe91c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:13.485204+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 2998272 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991339 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:14.485427+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 2990080 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.062094688s of 12.079286575s, submitted: 3
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:15.485588+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 2990080 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:16.485681+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 2990080 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:17.485892+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 2990080 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:18.486058+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 2990080 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993772 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:19.486350+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 2990080 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:20.486544+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 2990080 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:21.486696+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 2990080 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:22.486812+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:23.486967+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992917 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:24.487139+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:25.487262+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:26.487418+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:27.487648+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:28.487765+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992917 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:29.487938+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:30.488078+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:31.488213+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:32.488352+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fb15d800 session 0x5613fa1fe3c0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613f9e95c00 session 0x5613fba0d860
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:33.488491+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992917 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:34.488698+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:35.488860+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:36.488980+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:37.489132+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:38.489274+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 2981888 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992917 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:39.489447+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 2973696 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:40.489772+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 2973696 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:41.489981+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 2973696 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:42.490106+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 2973696 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90000
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 28.302938461s of 28.331325531s, submitted: 7
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:43.490263+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 2973696 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993049 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:44.490402+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 2973696 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:45.490580+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 2973696 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:46.490714+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 2965504 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:47.490904+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 2965504 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:48.491041+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 2965504 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996073 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90400
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:49.491182+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 2965504 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:50.491316+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 2965504 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:51.491457+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 2965504 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:52.491589+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 2965504 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:53.491684+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 2965504 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996994 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:54.491817+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 2965504 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:55.491933+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.060354233s of 12.191138268s, submitted: 5
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 2965504 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:56.492052+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 2965504 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:57.492177+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 2965504 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:58.492387+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 2965504 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996403 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T09:59:59.492604+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 2957312 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:00.492799+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 2957312 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:01.492988+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 2957312 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:02.493187+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 2957312 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:03.493342+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 2957312 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996271 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:04.493494+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fbe90400 session 0x5613fc49c3c0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fbe90000 session 0x5613fac70b40
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 2957312 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:05.493650+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 2957312 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:06.493773+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 2957312 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:07.494099+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 2957312 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:08.494269+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 2957312 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996271 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread fragmentation_score=0.000032 took=0.000055s
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:09.494481+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 2957312 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:10.494743+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 2957312 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:11.494942+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 2957312 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:12.495136+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 2957312 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:13.495302+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 2957312 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996271 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 9124 writes, 36K keys, 9124 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 9124 writes, 2040 syncs, 4.47 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 793 writes, 1304 keys, 793 commit groups, 1.0 writes per commit group, ingest: 0.44 MB, 0.00 MB/s
                                           Interval WAL: 793 writes, 362 syncs, 2.19 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b19350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b19350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b19350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b19350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b19350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b19350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b19350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b189b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b189b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b189b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b19350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613f7b19350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:14.495486+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 2924544 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:15.495697+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe91000
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.060838699s of 20.067579269s, submitted: 2
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 2924544 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:16.496732+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 2924544 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:17.496951+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 2924544 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:18.497131+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 2924544 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996403 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:19.497391+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85123072 unmapped: 2916352 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:20.497605+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85123072 unmapped: 2916352 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:21.497774+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc4b2000
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85131264 unmapped: 2908160 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:22.497954+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85131264 unmapped: 2908160 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:23.498160+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 85131264 unmapped: 2908160 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996403 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:24.498317+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [0,0,0,0,1])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 1859584 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:25.498490+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 1859584 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:26.498719+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 1859584 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:27.498910+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.003846169s of 12.153634071s, submitted: 3
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 1859584 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:28.499037+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 1859584 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994498 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:29.499236+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 1859584 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:30.499371+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 1859584 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:31.499564+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 1859584 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:32.499738+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 1859584 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:33.499887+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 1859584 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994498 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:34.500026+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 1859584 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:35.500179+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 1859584 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:36.500294+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fc4b2000 session 0x5613fc49c960
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fbe91000 session 0x5613fa1fed20
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 1859584 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:37.500446+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 1859584 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:38.500598+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 1859584 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994498 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:39.500792+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 1859584 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:40.500987+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 1859584 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:41.501132+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 1859584 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:42.501286+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 1859584 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:43.501434+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 1859584 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994498 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:44.501672+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 1859584 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:45.501811+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 1859584 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:46.501959+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 1859584 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.567741394s of 19.574918747s, submitted: 2
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:47.502148+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 1859584 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:48.502300+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 1859584 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994630 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:49.502548+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 1859584 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:50.502697+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 1859584 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:51.503120+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 1859584 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:52.503281+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 1851392 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:53.503476+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 1851392 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994630 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:54.503679+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 1851392 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:55.503852+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 1851392 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:56.503973+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 1851392 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:57.504113+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 1851392 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:58.504309+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 1851392 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994630 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:00:59.504488+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 1851392 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:00.504699+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.260245323s of 13.263541222s, submitted: 1
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 1851392 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:01.504852+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 1851392 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fbe91c00 session 0x5613fb685860
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fb15dc00 session 0x5613fb5c90e0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:02.505023+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 1851392 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:03.505197+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 1851392 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994498 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:04.505400+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 1851392 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:05.505533+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 1851392 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:06.505705+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 1851392 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:07.505867+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 1851392 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:08.506014+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 1851392 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994498 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:09.506192+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 1851392 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:10.506349+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 1851392 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:11.506482+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 1851392 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:12.506638+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 1851392 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:13.506802+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 1851392 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994498 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:14.506975+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 1851392 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:15.507155+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 1851392 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:16.507325+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb15d800
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.454084396s of 16.458341599s, submitted: 1
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86196224 unmapped: 1843200 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:17.507746+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [0,0,0,0,0,1,0,0,0,0,2])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86196224 unmapped: 1843200 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:18.507898+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994630 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86196224 unmapped: 1843200 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:19.508055+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86196224 unmapped: 1843200 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:20.508219+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86196224 unmapped: 1843200 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:21.508419+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86196224 unmapped: 1843200 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:22.508599+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86196224 unmapped: 1843200 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:23.508784+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994630 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86196224 unmapped: 1843200 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:24.508919+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86196224 unmapped: 1843200 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:25.509139+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90000
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86196224 unmapped: 1843200 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:26.509319+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 1835008 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:27.509559+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 1835008 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:28.509893+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996010 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 1835008 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:29.510169+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 1835008 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:30.510343+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 1835008 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:31.510549+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fbe91800 session 0x5613fc214b40
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613f88ad400 session 0x5613faef6960
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 1835008 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:32.510678+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 1835008 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:33.510823+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996010 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 1835008 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:34.510962+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 1835008 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:35.511174+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 1835008 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:36.511392+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 1835008 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:37.511581+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 1835008 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:38.511767+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996010 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 1835008 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:39.512754+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 1835008 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:40.512886+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 1835008 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:41.514004+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 1835008 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:42.515383+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f88ad400
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.819530487s of 25.665802002s, submitted: 3
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 1835008 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:43.515545+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996142 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 1835008 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:44.516708+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 1835008 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:45.517098+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 1835008 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:46.517961+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613f9e95c00 session 0x5613f9069680
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 1835008 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:47.518177+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 1835008 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:48.518686+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb15dc00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997654 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86212608 unmapped: 1826816 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:49.518882+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86212608 unmapped: 1826816 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:50.519389+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86220800 unmapped: 1818624 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:51.519814+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86220800 unmapped: 1818624 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:52.520238+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86220800 unmapped: 1818624 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:53.520426+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997654 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86220800 unmapped: 1818624 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:54.520766+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86220800 unmapped: 1818624 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:55.521226+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86220800 unmapped: 1818624 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:56.521567+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe91000
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.521481514s of 14.528515816s, submitted: 2
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86220800 unmapped: 1818624 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:57.521959+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86228992 unmapped: 1810432 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:58.522389+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997654 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86228992 unmapped: 1810432 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:01:59.522730+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86237184 unmapped: 1802240 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:00.522941+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86237184 unmapped: 1802240 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:01.523120+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86237184 unmapped: 1802240 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:02.523332+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe91800
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86237184 unmapped: 1802240 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:03.523534+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997063 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86237184 unmapped: 1802240 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:04.523702+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86237184 unmapped: 1802240 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:05.523922+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:06.524148+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86237184 unmapped: 1802240 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:07.524373+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86245376 unmapped: 1794048 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:08.524574+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86245376 unmapped: 1794048 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997063 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.129097939s of 12.140315056s, submitted: 3
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:09.524748+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86253568 unmapped: 1785856 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:10.525072+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86253568 unmapped: 1785856 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fb15d800 session 0x5613fc49d860
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fbe90000 session 0x5613faef7c20
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:11.525219+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86253568 unmapped: 1785856 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:12.525407+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86253568 unmapped: 1785856 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:13.525729+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86253568 unmapped: 1785856 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996340 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:14.525910+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86253568 unmapped: 1785856 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:15.526248+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86253568 unmapped: 1785856 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:16.526705+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86253568 unmapped: 1785856 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:17.526949+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86253568 unmapped: 1785856 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:18.527280+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86253568 unmapped: 1785856 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996340 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:19.527613+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86253568 unmapped: 1785856 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:20.527789+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86253568 unmapped: 1785856 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:21.527993+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86253568 unmapped: 1785856 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe91c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.481614113s of 12.492132187s, submitted: 3
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:22.528284+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86253568 unmapped: 1785856 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:23.528532+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86253568 unmapped: 1785856 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996472 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:24.528685+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86253568 unmapped: 1785856 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:25.528890+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86253568 unmapped: 1785856 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:26.529091+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86253568 unmapped: 1785856 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:27.529297+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 1777664 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:28.529490+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 1777664 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997984 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:29.529662+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 1777664 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:30.529794+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 1777664 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:31.529964+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 1777664 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:32.530116+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 1777664 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:33.530290+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 1777664 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:34.530531+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997984 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 1777664 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:35.530685+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 1777664 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:36.530878+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 1777664 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.158850670s of 15.166591644s, submitted: 2
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:37.531005+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 1777664 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:38.531156+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 1777664 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fbe91000 session 0x5613fb495680
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fbe91800 session 0x5613fc6c32c0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:39.531390+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997852 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 1777664 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:40.531509+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 1777664 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:41.531651+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 1777664 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:42.531838+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 1777664 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:43.531971+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 1777664 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:44.532157+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997852 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 1777664 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:45.532308+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 1777664 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:46.532459+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 1777664 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:47.532727+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 1777664 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:48.532879+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 1777664 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:49.533087+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997852 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 1777664 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe91800
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.706581116s of 12.710232735s, submitted: 1
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:50.533226+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 86351872 unmapped: 1687552 heap: 88039424 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:51.533377+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87834624 unmapped: 1253376 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:52.533777+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87834624 unmapped: 1253376 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:53.533944+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87834624 unmapped: 1253376 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:54.534163+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999496 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87834624 unmapped: 1253376 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:55.534308+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87834624 unmapped: 1253376 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:56.534494+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87842816 unmapped: 1245184 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:57.534648+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87842816 unmapped: 1245184 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:58.534820+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87842816 unmapped: 1245184 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:02:59.535052+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999496 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87851008 unmapped: 1236992 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:00.535249+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87851008 unmapped: 1236992 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:01.535406+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87851008 unmapped: 1236992 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.950714111s of 12.075445175s, submitted: 382
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fbe91c00 session 0x5613fc48ad20
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:02.535573+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87851008 unmapped: 1236992 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:03.535738+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87851008 unmapped: 1236992 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:04.535899+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998905 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87851008 unmapped: 1236992 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:05.536056+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87851008 unmapped: 1236992 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:06.536235+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87851008 unmapped: 1236992 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:07.536408+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87851008 unmapped: 1236992 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:08.536630+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 1228800 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:09.536813+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998773 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 1228800 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:10.537041+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 1228800 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:11.537243+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 1228800 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:12.537394+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 1228800 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb15d800
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.236394882s of 11.243492126s, submitted: 2
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:13.537573+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 1228800 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:14.537777+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998905 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87859200 unmapped: 1228800 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:15.537974+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87867392 unmapped: 1220608 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:16.538175+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87867392 unmapped: 1220608 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:17.538325+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87875584 unmapped: 1212416 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:18.538512+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87875584 unmapped: 1212416 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:19.538748+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998905 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87875584 unmapped: 1212416 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:20.538942+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87875584 unmapped: 1212416 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:21.539124+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87875584 unmapped: 1212416 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:22.539248+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87875584 unmapped: 1212416 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.111818314s of 10.114892006s, submitted: 1
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:23.539440+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87883776 unmapped: 1204224 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:24.539613+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998314 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87883776 unmapped: 1204224 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:25.539848+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87883776 unmapped: 1204224 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:26.540031+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87883776 unmapped: 1204224 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:27.540259+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87883776 unmapped: 1204224 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:28.540435+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87883776 unmapped: 1204224 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:29.540613+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998182 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87883776 unmapped: 1204224 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:30.540767+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87883776 unmapped: 1204224 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:31.540905+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87891968 unmapped: 1196032 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:32.541115+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87891968 unmapped: 1196032 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:33.541296+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87891968 unmapped: 1196032 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:34.541470+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998182 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87891968 unmapped: 1196032 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:35.541736+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87891968 unmapped: 1196032 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:36.542002+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87891968 unmapped: 1196032 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:37.542211+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87891968 unmapped: 1196032 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:38.542343+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87891968 unmapped: 1196032 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:39.542537+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998182 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87891968 unmapped: 1196032 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:40.542806+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613f9e95c00 session 0x5613fc51b4a0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fbe91800 session 0x5613f9f3ef00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87891968 unmapped: 1196032 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:41.542987+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87891968 unmapped: 1196032 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:42.543202+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87891968 unmapped: 1196032 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:43.543379+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87891968 unmapped: 1196032 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:44.543551+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998182 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87891968 unmapped: 1196032 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:45.543742+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87891968 unmapped: 1196032 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:46.543930+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87891968 unmapped: 1196032 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:47.544104+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87900160 unmapped: 1187840 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:48.544331+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87900160 unmapped: 1187840 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:49.544610+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998182 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87900160 unmapped: 1187840 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:50.544839+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87900160 unmapped: 1187840 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:51.545023+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90000
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 28.453969955s of 28.460792542s, submitted: 2
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87908352 unmapped: 1179648 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:52.545205+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87908352 unmapped: 1179648 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:53.545364+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87908352 unmapped: 1179648 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:54.545516+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999826 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87908352 unmapped: 1179648 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:55.545699+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87916544 unmapped: 1171456 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:56.545900+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87916544 unmapped: 1171456 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:57.546096+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe91000
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87916544 unmapped: 1171456 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:58.546306+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87916544 unmapped: 1171456 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:03:59.546489+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001338 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87916544 unmapped: 1171456 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:00.546687+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87916544 unmapped: 1171456 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:01.546883+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87916544 unmapped: 1171456 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:02.547056+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87916544 unmapped: 1171456 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:03.547233+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87916544 unmapped: 1171456 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:04.547414+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.090477943s of 13.101955414s, submitted: 3
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001206 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 1163264 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:05.547673+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 1163264 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:06.547875+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 1163264 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:07.548030+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 1163264 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:08.548201+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 1163264 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:09.548419+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001206 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 1163264 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:10.548567+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 1163264 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:11.548759+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 1163264 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:12.548971+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 1163264 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:13.549172+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 1163264 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:14.549345+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001206 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 1163264 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:15.549732+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 1163264 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:16.549966+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 1163264 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:17.550189+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 1163264 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:18.550491+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 1163264 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:19.550829+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001206 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 1163264 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:20.551043+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 1163264 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:21.551266+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 1163264 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:22.551556+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 1163264 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:23.551726+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:24.551982+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001206 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:25.552222+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:26.552393+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:27.552593+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:28.552717+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:29.552898+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Dec 07 10:28:48 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2021840992' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001206 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:30.553158+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:31.553351+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:32.553526+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:33.553699+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:34.553870+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001206 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:35.554044+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:36.554200+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:37.554365+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:38.554525+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:39.554676+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001206 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:40.554907+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:41.555062+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:42.555231+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:43.555399+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:44.555580+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001206 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:45.555744+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:46.556509+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:47.556822+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:48.557064+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:49.558034+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001206 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:50.558251+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:51.558460+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:52.558690+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:53.558856+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:54.559056+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001206 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:55.559256+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:56.559452+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:57.559603+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:58.559808+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:04:59.560045+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001206 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:00.560250+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:01.560437+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:02.560704+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:03.560868+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:04.561117+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001206 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:05.561334+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:06.561579+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:07.561764+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:08.561975+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:09.562235+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001206 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:10.562479+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:11.562714+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 1155072 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:12.562919+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:13.563091+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:14.563362+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001206 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:15.563570+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:16.563820+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:17.564081+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:18.564332+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:19.564610+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001206 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:20.565003+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:21.565188+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:22.565354+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:23.565484+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:24.565657+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001206 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:25.565846+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:26.566051+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:27.566189+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:28.566340+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:29.566527+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001206 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:30.566719+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:31.566872+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:32.567056+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:33.567202+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:34.567309+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001206 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:35.567379+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:36.567533+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:37.567691+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:38.567808+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:39.568015+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001206 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:40.568179+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:41.568339+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:42.568508+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:43.568717+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:44.568926+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001206 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:45.569075+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:46.569212+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:47.569345+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:48.569525+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:49.569757+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001206 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:50.569959+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:51.570133+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:52.570267+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:53.570416+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:54.570558+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001206 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:55.570768+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:56.570966+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:57.571207+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:58.571397+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:05:59.571550+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001206 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:00.571707+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:01.571892+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613fb15dc00 session 0x5613fa1fe5a0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 ms_handle_reset con 0x5613f88ad400 session 0x5613fa1fe3c0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:02.572072+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _renew_subs
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 118.321998596s of 118.326072693s, submitted: 1
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:03.572229+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1146880 heap: 89088000 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 149 heartbeat osd_stat(store_statfs(0x4fc5da000/0x0/0x4ffc00000, data 0x1750f9/0x232000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 149 handle_osd_map epochs [149,150], i have 149, src has [1,150]
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:04.572357+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 88039424 unmapped: 18882560 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _renew_subs
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 151 ms_handle_reset con 0x5613f9e95c00 session 0x5613fc756b40
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1070180 data_alloc: 218103808 data_used: 253952
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:05.572529+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 88039424 unmapped: 18882560 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb15dc00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:06.572706+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87932928 unmapped: 18989056 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 151 handle_osd_map epochs [151,152], i have 151, src has [1,152]
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 152 ms_handle_reset con 0x5613fb15dc00 session 0x5613fc756960
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:07.572869+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 18980864 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:08.572964+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 18980864 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:09.573146+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 18980864 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fbdca000/0x0/0x4ffc00000, data 0x97d589/0xa41000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074835 data_alloc: 218103808 data_used: 262144
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:10.573316+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 18980864 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:11.573503+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 18980864 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 152 heartbeat osd_stat(store_statfs(0x4fbdca000/0x0/0x4ffc00000, data 0x97d589/0xa41000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:12.573637+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 18980864 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 152 handle_osd_map epochs [152,153], i have 152, src has [1,153]
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe91800
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 153 ms_handle_reset con 0x5613fbe91000 session 0x5613fb686780
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 153 ms_handle_reset con 0x5613fbe90000 session 0x5613fc2abc20
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:13.573774+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 18980864 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:14.573902+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 18980864 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077725 data_alloc: 218103808 data_used: 262144
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:15.574060+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 18980864 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fbdc7000/0x0/0x4ffc00000, data 0x97f55b/0xa44000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe91c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.536023140s of 12.729516029s, submitted: 74
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:16.574218+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 18980864 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:17.574863+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 18980864 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:18.575008+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 18980864 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90400
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:19.575236+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 18980864 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fbdc8000/0x0/0x4ffc00000, data 0x97f55b/0xa44000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079909 data_alloc: 218103808 data_used: 262144
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:20.575406+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 18980864 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:21.575653+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 18980864 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:22.575847+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 18980864 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:23.575966+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 18980864 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc4b2400
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fbdc8000/0x0/0x4ffc00000, data 0x97f55b/0xa44000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:24.576153+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 18980864 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1080041 data_alloc: 218103808 data_used: 262144
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:25.576318+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 18980864 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:26.576556+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 18980864 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fbdc8000/0x0/0x4ffc00000, data 0x97f55b/0xa44000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:27.576758+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 18980864 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.903751373s of 11.915815353s, submitted: 3
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:28.576884+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fbdc8000/0x0/0x4ffc00000, data 0x97f55b/0xa44000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 18980864 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:29.577122+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 18980864 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc4b2800
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079318 data_alloc: 218103808 data_used: 262144
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:30.577330+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 18980864 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fbdc8000/0x0/0x4ffc00000, data 0x97f55b/0xa44000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:31.577483+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 18980864 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:32.577689+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fbdc8000/0x0/0x4ffc00000, data 0x97f55b/0xa44000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 18980864 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:33.577778+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 18997248 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fbdc8000/0x0/0x4ffc00000, data 0x97f55b/0xa44000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:34.577937+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 18997248 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079318 data_alloc: 218103808 data_used: 262144
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:35.578113+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 18997248 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:36.578240+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fbdc8000/0x0/0x4ffc00000, data 0x97f55b/0xa44000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 18997248 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:37.578361+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 18997248 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:38.578554+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 18997248 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:39.578845+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 18997248 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1078004 data_alloc: 218103808 data_used: 262144
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:40.578970+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 18997248 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fbdc8000/0x0/0x4ffc00000, data 0x97f55b/0xa44000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:41.579174+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 87924736 unmapped: 18997248 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc4b2c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 153 ms_handle_reset con 0x5613fc4b2c00 session 0x5613fc7572c0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 153 ms_handle_reset con 0x5613f9e95c00 session 0x5613fc7574a0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:42.579375+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 95715328 unmapped: 11206656 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:43.579535+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb15dc00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 11190272 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 153 heartbeat osd_stat(store_statfs(0x4fbdc8000/0x0/0x4ffc00000, data 0x97f55b/0xa44000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:44.579716+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 153 handle_osd_map epochs [153,154], i have 153, src has [1,154]
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.819295883s of 16.843923569s, submitted: 6
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _renew_subs
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 154 handle_osd_map epochs [154,154], i have 154, src has [1,154]
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 95780864 unmapped: 11141120 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 154 heartbeat osd_stat(store_statfs(0x4fbdc8000/0x0/0x4ffc00000, data 0x97f55b/0xa44000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1103250 data_alloc: 218103808 data_used: 7086080
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:45.579911+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _renew_subs
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 154 handle_osd_map epochs [155,155], i have 154, src has [1,155]
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 95780864 unmapped: 11141120 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 155 ms_handle_reset con 0x5613fb15dc00 session 0x5613fc757860
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90000
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 155 ms_handle_reset con 0x5613fbe90000 session 0x5613fc756780
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe91000
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 155 ms_handle_reset con 0x5613fbe91000 session 0x5613f88c0f00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc4b3000
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 155 ms_handle_reset con 0x5613fc4b3000 session 0x5613fc3e65a0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 155 ms_handle_reset con 0x5613f9e95c00 session 0x5613fb2cbe00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fb684000/0x0/0x4ffc00000, data 0x10be7fc/0x1187000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:46.580054+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 96231424 unmapped: 10690560 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fb684000/0x0/0x4ffc00000, data 0x10be7fc/0x1187000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:47.580302+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 96231424 unmapped: 10690560 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb15dc00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 155 ms_handle_reset con 0x5613fb15dc00 session 0x5613fc73cf00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:48.580466+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 96231424 unmapped: 10690560 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:49.580647+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 96231424 unmapped: 10690560 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90000
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 155 ms_handle_reset con 0x5613fbe90000 session 0x5613f95592c0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fb684000/0x0/0x4ffc00000, data 0x10be7fc/0x1187000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165908 data_alloc: 218103808 data_used: 7086080
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:50.580798+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe91000
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 155 ms_handle_reset con 0x5613fbe91000 session 0x5613fc49da40
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 96231424 unmapped: 10690560 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc4b3000
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 155 ms_handle_reset con 0x5613fc4b3000 session 0x5613fba0e000
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 155 heartbeat osd_stat(store_statfs(0x4fb684000/0x0/0x4ffc00000, data 0x10be7fc/0x1187000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:51.580904+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb15dc00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 96280576 unmapped: 10641408 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:52.581102+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 96280576 unmapped: 10641408 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _renew_subs
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 155 handle_osd_map epochs [156,156], i have 155, src has [1,156]
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:53.581369+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 99614720 unmapped: 7307264 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:54.581545+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 103194624 unmapped: 3727360 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217838 data_alloc: 234881024 data_used: 14565376
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:55.581703+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb681000/0x0/0x4ffc00000, data 0x10c07ce/0x118a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 103194624 unmapped: 3727360 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:56.581868+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 103194624 unmapped: 3727360 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:57.582003+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 103194624 unmapped: 3727360 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:58.582152+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 103194624 unmapped: 3727360 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:06:59.582386+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 103194624 unmapped: 3727360 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217838 data_alloc: 234881024 data_used: 14565376
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:00.582565+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 103194624 unmapped: 3727360 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb681000/0x0/0x4ffc00000, data 0x10c07ce/0x118a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:01.582724+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 103194624 unmapped: 3727360 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:02.582926+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 103194624 unmapped: 3727360 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:03.583133+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 103194624 unmapped: 3727360 heap: 106921984 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.420452118s of 19.636718750s, submitted: 77
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb681000/0x0/0x4ffc00000, data 0x10c07ce/0x118a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:04.583281+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 110600192 unmapped: 1564672 heap: 112164864 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1293890 data_alloc: 234881024 data_used: 15286272
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:05.583448+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 110706688 unmapped: 1458176 heap: 112164864 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:06.583677+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 108683264 unmapped: 3481600 heap: 112164864 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:07.583867+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 108683264 unmapped: 3481600 heap: 112164864 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:08.584033+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 108683264 unmapped: 3481600 heap: 112164864 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:09.584246+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 108716032 unmapped: 3448832 heap: 112164864 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9ca3000/0x0/0x4ffc00000, data 0x18ff7ce/0x19c9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297322 data_alloc: 234881024 data_used: 15429632
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:10.584419+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 108716032 unmapped: 3448832 heap: 112164864 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:11.584668+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 108716032 unmapped: 3448832 heap: 112164864 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:12.584842+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 108756992 unmapped: 3407872 heap: 112164864 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:13.584996+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 108756992 unmapped: 3407872 heap: 112164864 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9ca3000/0x0/0x4ffc00000, data 0x18ff7ce/0x19c9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:14.585168+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 108773376 unmapped: 3391488 heap: 112164864 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1298250 data_alloc: 234881024 data_used: 15499264
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:15.585332+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 108773376 unmapped: 3391488 heap: 112164864 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:16.585731+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:17.585893+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 108773376 unmapped: 3391488 heap: 112164864 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:18.586550+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 108773376 unmapped: 3391488 heap: 112164864 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:19.586822+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 108773376 unmapped: 3391488 heap: 112164864 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9ca3000/0x0/0x4ffc00000, data 0x18ff7ce/0x19c9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1298250 data_alloc: 234881024 data_used: 15499264
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:20.586964+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 108806144 unmapped: 3358720 heap: 112164864 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:21.587138+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 108806144 unmapped: 3358720 heap: 112164864 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9ca3000/0x0/0x4ffc00000, data 0x18ff7ce/0x19c9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90000
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe90000 session 0x5613fc2aad20
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:22.587274+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe91000
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe90400 session 0x5613fc2ab0e0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe91800 session 0x5613fba0e1e0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 110043136 unmapped: 2121728 heap: 112164864 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe91000 session 0x5613fc3e6f00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:23.587608+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc4b3400
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.168640137s of 19.375652313s, submitted: 87
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 10330112 heap: 126861312 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fc4b3400 session 0x5613fb5c92c0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90000
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe90000 session 0x5613fc51ad20
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90400
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe90400 session 0x5613fb2ca5a0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe91000
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe91000 session 0x5613fb4eeb40
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe91800
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe91800 session 0x5613fc011c20
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:24.587805+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 19750912 heap: 130539520 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:25.588044+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1405947 data_alloc: 234881024 data_used: 16031744
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 19750912 heap: 130539520 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:26.588268+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 19750912 heap: 130539520 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:27.588424+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 19750912 heap: 130539520 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f8e76000/0x0/0x4ffc00000, data 0x272b830/0x27f6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:28.588580+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 19718144 heap: 130539520 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:29.588830+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 19718144 heap: 130539520 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc4b3800
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fc4b3800 session 0x5613fc49d860
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:30.589021+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1406308 data_alloc: 234881024 data_used: 16031744
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 110837760 unmapped: 19701760 heap: 130539520 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90000
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90400
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:31.589174+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 110845952 unmapped: 19693568 heap: 130539520 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:32.589345+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f8e76000/0x0/0x4ffc00000, data 0x272b830/0x27f6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [1])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 116301824 unmapped: 14237696 heap: 130539520 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:33.589507+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe91000
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125075456 unmapped: 5464064 heap: 130539520 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:34.589696+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125075456 unmapped: 5464064 heap: 130539520 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:35.589872+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1506132 data_alloc: 251658240 data_used: 30793728
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125075456 unmapped: 5464064 heap: 130539520 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:36.590032+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.878297806s of 13.041009903s, submitted: 42
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f8e76000/0x0/0x4ffc00000, data 0x272b830/0x27f6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125149184 unmapped: 5390336 heap: 130539520 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:37.590198+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125149184 unmapped: 5390336 heap: 130539520 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:38.590387+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125149184 unmapped: 5390336 heap: 130539520 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:39.590594+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe91800
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125149184 unmapped: 5390336 heap: 130539520 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:40.590827+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1507948 data_alloc: 251658240 data_used: 30793728
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125149184 unmapped: 5390336 heap: 130539520 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f8e74000/0x0/0x4ffc00000, data 0x272c830/0x27f7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:41.591019+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125181952 unmapped: 5357568 heap: 130539520 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:42.591158+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125181952 unmapped: 5357568 heap: 130539520 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:43.591269+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125181952 unmapped: 5357568 heap: 130539520 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f8e74000/0x0/0x4ffc00000, data 0x272c830/0x27f7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [1])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:44.591511+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 132603904 unmapped: 1081344 heap: 133685248 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:45.591706+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1622320 data_alloc: 251658240 data_used: 31846400
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 133136384 unmapped: 1597440 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:46.591826+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f8199000/0x0/0x4ffc00000, data 0x3408830/0x34d3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 133136384 unmapped: 1597440 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:47.591991+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.790105820s of 11.055186272s, submitted: 128
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 133136384 unmapped: 1597440 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:48.592220+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 133144576 unmapped: 1589248 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:49.592471+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 133144576 unmapped: 1589248 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:50.592679+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1615332 data_alloc: 251658240 data_used: 32133120
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 131301376 unmapped: 3432448 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f818d000/0x0/0x4ffc00000, data 0x3414830/0x34df000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:51.592883+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 131301376 unmapped: 3432448 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:52.593056+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 131301376 unmapped: 3432448 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:53.593257+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 131301376 unmapped: 3432448 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:54.593501+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 131301376 unmapped: 3432448 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe90000 session 0x5613f9f3d860
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe90400 session 0x5613fc6c2f00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc4b3c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:55.593704+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fc4b3c00 session 0x5613fc49c5a0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315218 data_alloc: 234881024 data_used: 16031744
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 15499264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f98a1000/0x0/0x4ffc00000, data 0x19007ce/0x19ca000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:56.593840+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 15499264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:57.593996+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f98a1000/0x0/0x4ffc00000, data 0x19007ce/0x19ca000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 15499264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:58.594139+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 15499264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:07:59.594341+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 15499264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:00.594481+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f98a1000/0x0/0x4ffc00000, data 0x19007ce/0x19ca000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315218 data_alloc: 234881024 data_used: 16031744
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.011151314s of 13.162199974s, submitted: 50
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613fba0e780
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 119234560 unmapped: 15499264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fb15dc00 session 0x5613f9f3e780
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:01.594709+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613fb688780
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 21643264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:02.594848+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 21643264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:03.594979+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 21643264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:04.595132+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 21643264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:05.595269+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1137852 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac1d000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 21643264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac1d000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:06.595423+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 21643264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:07.595740+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 21643264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac1d000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:08.595933+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 21643264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:09.596131+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 21643264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac1d000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:10.596299+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1137852 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 21643264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:11.596469+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 21643264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:12.596602+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 21643264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:13.596796+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 21643264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac1d000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:14.596958+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 21643264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:15.597103+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1137852 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 21643264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:16.597243+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac1d000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 21643264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:17.597375+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 21643264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:18.597490+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 21643264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:19.597700+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 21643264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:20.597903+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1137852 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac1d000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 21643264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fac1d000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:21.598089+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 21643264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:22.598267+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 21643264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:23.598405+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 21643264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:24.598570+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 21643264 heap: 134733824 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:25.598756+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90000
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1137852 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 24.815328598s of 24.952289581s, submitted: 44
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe90000 session 0x5613fc6c34a0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 37666816 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90400
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe90400 session 0x5613f9cb6b40
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc4b3c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fc4b3c00 session 0x5613fb4941e0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc107400
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fc107400 session 0x5613fc42d0e0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613fb2cad20
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:26.598952+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9dae000/0x0/0x4ffc00000, data 0x17f576c/0x18be000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 37666816 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fc4b2800 session 0x5613fa24d680
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fc4b2400 session 0x5613fb410d20
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:27.599122+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 37666816 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:28.599303+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 37666816 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:29.599506+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112943104 unmapped: 37666816 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90000
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe90000 session 0x5613fac71860
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:30.599696+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244374 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90400
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc4b3c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113246208 unmapped: 37363712 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9d8a000/0x0/0x4ffc00000, data 0x181976c/0x18e2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:31.599830+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9d8a000/0x0/0x4ffc00000, data 0x181976c/0x18e2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113123328 unmapped: 37486592 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:32.599979+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 30695424 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:33.600194+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 30695424 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:34.600356+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 30695424 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:35.600526+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339374 data_alloc: 234881024 data_used: 21630976
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 30695424 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:36.600669+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 30695424 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:37.600890+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9d8a000/0x0/0x4ffc00000, data 0x181976c/0x18e2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 30695424 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf56400
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.176795006s of 12.298893929s, submitted: 14
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:38.601031+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 30695424 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:39.601170+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 30695424 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9d8a000/0x0/0x4ffc00000, data 0x181976c/0x18e2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:40.601301+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339506 data_alloc: 234881024 data_used: 21630976
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 30695424 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9d8a000/0x0/0x4ffc00000, data 0x181976c/0x18e2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:41.601528+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 30695424 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:42.601673+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 119898112 unmapped: 30711808 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:43.601816+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 122273792 unmapped: 28336128 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:44.601973+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f970e000/0x0/0x4ffc00000, data 0x1e9576c/0x1f5e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 122707968 unmapped: 27901952 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:45.602140+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1410990 data_alloc: 234881024 data_used: 22097920
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 122707968 unmapped: 27901952 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:46.602322+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f96dc000/0x0/0x4ffc00000, data 0x1ec676c/0x1f8f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 122707968 unmapped: 27901952 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:47.602451+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 122707968 unmapped: 27901952 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:48.602601+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 122707968 unmapped: 27901952 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:49.602838+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.586152077s of 11.780480385s, submitted: 70
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121733120 unmapped: 28876800 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:50.603013+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1407087 data_alloc: 234881024 data_used: 22097920
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121733120 unmapped: 28876800 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:51.603276+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121733120 unmapped: 28876800 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f96da000/0x0/0x4ffc00000, data 0x1ec976c/0x1f92000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:52.603430+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121733120 unmapped: 28876800 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:53.603559+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121733120 unmapped: 28876800 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:54.603720+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121733120 unmapped: 28876800 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:55.603910+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1406955 data_alloc: 234881024 data_used: 22097920
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121733120 unmapped: 28876800 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f96da000/0x0/0x4ffc00000, data 0x1ec976c/0x1f92000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:56.604113+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121733120 unmapped: 28876800 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f96da000/0x0/0x4ffc00000, data 0x1ec976c/0x1f92000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:57.604293+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121733120 unmapped: 28876800 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:58.604475+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121733120 unmapped: 28876800 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:08:59.604691+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121733120 unmapped: 28876800 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:00.605147+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1406955 data_alloc: 234881024 data_used: 22097920
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121733120 unmapped: 28876800 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:01.605310+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121733120 unmapped: 28876800 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f96da000/0x0/0x4ffc00000, data 0x1ec976c/0x1f92000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:02.605466+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f96da000/0x0/0x4ffc00000, data 0x1ec976c/0x1f92000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe91c00 session 0x5613f9069860
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fb15d800 session 0x5613fc73c5a0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121733120 unmapped: 28876800 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f96da000/0x0/0x4ffc00000, data 0x1ec976c/0x1f92000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:03.605673+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121733120 unmapped: 28876800 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:04.605874+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121733120 unmapped: 28876800 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:05.606032+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1406955 data_alloc: 234881024 data_used: 22097920
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe90400 session 0x5613fb4103c0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fc4b3c00 session 0x5613fba123c0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121716736 unmapped: 28893184 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.448675156s of 16.463441849s, submitted: 4
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:06.606172+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613fc73cf00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 37912576 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f96da000/0x0/0x4ffc00000, data 0x1ec976c/0x1f92000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:07.606303+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 37912576 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:08.606474+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 37912576 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:09.606651+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 37912576 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:10.606813+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1152945 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 37912576 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:11.606941+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 37912576 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:12.607099+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 37912576 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90000
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:13.607255+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 37912576 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:14.607411+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 37912576 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:15.607675+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153077 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 37912576 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:16.607864+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc4b2400
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.148706436s of 10.198718071s, submitted: 22
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 37912576 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:17.607999+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 37912576 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:18.620009+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112697344 unmapped: 37912576 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:19.620214+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc4b2800
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112091136 unmapped: 38518784 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:20.620459+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154589 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112091136 unmapped: 38518784 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:21.620761+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112091136 unmapped: 38518784 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:22.620976+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112091136 unmapped: 38518784 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:23.621156+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112091136 unmapped: 38518784 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:24.621333+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112091136 unmapped: 38518784 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:25.621467+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153407 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112091136 unmapped: 38518784 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:26.621575+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112091136 unmapped: 38518784 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:27.621728+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.545178413s of 11.556472778s, submitted: 3
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112099328 unmapped: 38510592 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:28.621866+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112099328 unmapped: 38510592 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:29.622038+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112099328 unmapped: 38510592 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:30.622215+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153275 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112099328 unmapped: 38510592 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:31.622344+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc32c000
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fc32c000 session 0x5613fc3e6000
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613fc3e7680
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb15d800
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fb15d800 session 0x5613fc2aba40
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90400
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe90400 session 0x5613fc2aa5a0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc4b3c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [1])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fc4b3c00 session 0x5613fc2ab4a0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc107c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fc107c00 session 0x5613fb4114a0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613fc2aa960
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38469632 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb15d800
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fb15d800 session 0x5613f95592c0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90400
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe90400 session 0x5613fc48b680
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:32.622480+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38469632 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:33.622679+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38469632 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:34.622822+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38469632 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:35.623003+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa107000/0x0/0x4ffc00000, data 0x108b77c/0x1155000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206494 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38469632 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:36.623141+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38469632 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc4b3c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:37.623286+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fc4b3c00 session 0x5613fb410000
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa107000/0x0/0x4ffc00000, data 0x108b77c/0x1155000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf57c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc32dc00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 38453248 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:38.623461+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 112345088 unmapped: 38264832 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:39.623682+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa106000/0x0/0x4ffc00000, data 0x108b79f/0x1156000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113188864 unmapped: 37421056 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:40.623968+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1259412 data_alloc: 234881024 data_used: 14983168
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113188864 unmapped: 37421056 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa106000/0x0/0x4ffc00000, data 0x108b79f/0x1156000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:41.624385+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113188864 unmapped: 37421056 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:42.624558+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa106000/0x0/0x4ffc00000, data 0x108b79f/0x1156000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113188864 unmapped: 37421056 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:43.624930+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113188864 unmapped: 37421056 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:44.625254+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113188864 unmapped: 37421056 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa106000/0x0/0x4ffc00000, data 0x108b79f/0x1156000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:45.625548+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1259412 data_alloc: 234881024 data_used: 14983168
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113188864 unmapped: 37421056 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:46.625742+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113188864 unmapped: 37421056 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:47.626312+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113188864 unmapped: 37421056 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:48.626588+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 113188864 unmapped: 37421056 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:49.626894+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.326250076s of 21.455661774s, submitted: 29
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115564544 unmapped: 35045376 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:50.627165+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1293924 data_alloc: 234881024 data_used: 15011840
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9df0000/0x0/0x4ffc00000, data 0x139379f/0x145e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 33480704 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:51.627377+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 33480704 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:52.627537+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 33480704 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:53.627683+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 33382400 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:54.627816+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fc4b2400 session 0x5613fc3e65a0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf56400 session 0x5613fb4945a0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 33382400 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:55.628026+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1293924 data_alloc: 234881024 data_used: 15011840
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 33366016 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:56.628262+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9df0000/0x0/0x4ffc00000, data 0x139379f/0x145e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 33366016 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:57.628464+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 33366016 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:58.628704+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 33366016 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9dfb000/0x0/0x4ffc00000, data 0x139679f/0x1461000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:09:59.628936+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 33366016 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:00.629114+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288748 data_alloc: 234881024 data_used: 15011840
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 33366016 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:01.629256+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 33366016 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:02.629397+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.119906425s of 13.254473686s, submitted: 55
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 33357824 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:03.629562+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 33357824 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:04.629715+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9dfa000/0x0/0x4ffc00000, data 0x139779f/0x1462000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 33357824 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:05.629882+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288972 data_alloc: 234881024 data_used: 15011840
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 33357824 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:06.630003+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 33357824 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:07.630157+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9dfa000/0x0/0x4ffc00000, data 0x139779f/0x1462000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613f9162960
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb15d800
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fb15d800 session 0x5613f91634a0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90400
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe90400 session 0x5613f9163c20
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc4b3c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fc4b3c00 session 0x5613fc7563c0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613fc757e00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf56400
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf56400 session 0x5613fc7561e0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb15d800
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fb15d800 session 0x5613fc757680
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90400
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe90400 session 0x5613fb2ca5a0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 116654080 unmapped: 33955840 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb0c7400
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fb0c7400 session 0x5613fb2ca000
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:08.630309+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 116654080 unmapped: 33955840 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:09.630505+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 116654080 unmapped: 33955840 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:10.630700+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313641 data_alloc: 234881024 data_used: 15011840
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 116654080 unmapped: 33955840 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:11.630866+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9b66000/0x0/0x4ffc00000, data 0x1629810/0x16f6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 116654080 unmapped: 33955840 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:12.631055+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 116662272 unmapped: 33947648 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:13.631357+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.039366722s of 11.124565125s, submitted: 25
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 11K writes, 42K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 11K writes, 2845 syncs, 3.89 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1953 writes, 6016 keys, 1953 commit groups, 1.0 writes per commit group, ingest: 5.65 MB, 0.01 MB/s
                                           Interval WAL: 1953 writes, 805 syncs, 2.43 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613fba123c0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 116662272 unmapped: 33947648 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf56400
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb15d800
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:14.631548+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9b65000/0x0/0x4ffc00000, data 0x1629833/0x16f7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 116662272 unmapped: 33947648 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:15.631814+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1323446 data_alloc: 234881024 data_used: 16302080
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118022144 unmapped: 32587776 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:16.631956+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118054912 unmapped: 32555008 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:17.632192+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118054912 unmapped: 32555008 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:18.632368+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118054912 unmapped: 32555008 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:19.632538+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9b65000/0x0/0x4ffc00000, data 0x1629833/0x16f7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118054912 unmapped: 32555008 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:20.632738+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333174 data_alloc: 234881024 data_used: 17711104
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118054912 unmapped: 32555008 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:21.632888+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118054912 unmapped: 32555008 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:22.633005+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9b64000/0x0/0x4ffc00000, data 0x1629833/0x16f7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118054912 unmapped: 32555008 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:23.633219+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118054912 unmapped: 32555008 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:24.633562+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9b64000/0x0/0x4ffc00000, data 0x1629833/0x16f7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118054912 unmapped: 32555008 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:25.633744+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333298 data_alloc: 234881024 data_used: 17715200
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118054912 unmapped: 32555008 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.484905243s of 12.509075165s, submitted: 7
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:26.633881+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 119447552 unmapped: 31162368 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:27.634066+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 119455744 unmapped: 31154176 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:28.634227+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 119488512 unmapped: 31121408 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:29.634458+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 119488512 unmapped: 31121408 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:30.634641+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f97fd000/0x0/0x4ffc00000, data 0x1991833/0x1a5f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1360756 data_alloc: 234881024 data_used: 17846272
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 119496704 unmapped: 31113216 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:31.634815+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 119496704 unmapped: 31113216 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:32.634984+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f97fd000/0x0/0x4ffc00000, data 0x1991833/0x1a5f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 119496704 unmapped: 31113216 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:33.635180+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 119496704 unmapped: 31113216 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:34.635316+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 119496704 unmapped: 31113216 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:35.635480+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f97fd000/0x0/0x4ffc00000, data 0x1991833/0x1a5f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf56400 session 0x5613fba13c20
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fb15d800 session 0x5613fb2cbe00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90400
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1360756 data_alloc: 234881024 data_used: 17846272
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118472704 unmapped: 32137216 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.854346275s of 10.019863129s, submitted: 62
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:36.635695+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe90400 session 0x5613fc73d2c0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118571008 unmapped: 32038912 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:37.635959+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118571008 unmapped: 32038912 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:38.636179+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118571008 unmapped: 32038912 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:39.636440+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118571008 unmapped: 32038912 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:40.636776+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf57c00 session 0x5613fb410d20
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fc32dc00 session 0x5613fc3e61e0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1293708 data_alloc: 234881024 data_used: 14999552
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9df8000/0x0/0x4ffc00000, data 0x139779f/0x1462000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118579200 unmapped: 32030720 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:41.636958+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613fb4ee5a0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115023872 unmapped: 35586048 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:42.637122+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80d000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115023872 unmapped: 35586048 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:43.637352+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80d000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115023872 unmapped: 35586048 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:44.637550+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115032064 unmapped: 35577856 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:45.637741+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169636 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115032064 unmapped: 35577856 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:46.637858+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115032064 unmapped: 35577856 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:47.638034+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115032064 unmapped: 35577856 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:48.638313+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115032064 unmapped: 35577856 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:49.638605+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80d000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115032064 unmapped: 35577856 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:50.638956+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169636 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115032064 unmapped: 35577856 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:51.639141+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115032064 unmapped: 35577856 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:52.639375+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80d000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115040256 unmapped: 35569664 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:53.639575+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115040256 unmapped: 35569664 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:54.639777+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80d000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115040256 unmapped: 35569664 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:55.640063+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169636 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115040256 unmapped: 35569664 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:56.640301+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115040256 unmapped: 35569664 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:57.640545+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115040256 unmapped: 35569664 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:58.640773+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115048448 unmapped: 35561472 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:10:59.641012+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80d000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115048448 unmapped: 35561472 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:00.641218+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169636 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115048448 unmapped: 35561472 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:01.641382+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115048448 unmapped: 35561472 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:02.641537+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115048448 unmapped: 35561472 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:03.641699+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80d000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:04.641869+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115048448 unmapped: 35561472 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:05.642051+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115048448 unmapped: 35561472 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169636 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:06.642192+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115048448 unmapped: 35561472 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf56400
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.730876923s of 30.943260193s, submitted: 83
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:07.642335+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf56400 session 0x5613fc7563c0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115056640 unmapped: 35553280 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf57c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf57c00 session 0x5613fb4114a0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb15d800
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fb15d800 session 0x5613fc2ab4a0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613fc0103c0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf56400
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf56400 session 0x5613fb689e00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa08e000/0x0/0x4ffc00000, data 0x110576c/0x11ce000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:08.642502+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 35602432 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:09.642701+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 35602432 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa08e000/0x0/0x4ffc00000, data 0x110576c/0x11ce000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:10.642903+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 35602432 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223983 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:11.643119+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 35602432 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:12.643265+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 35602432 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf57c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf57c00 session 0x5613fc49c5a0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc32dc00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90400
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa08e000/0x0/0x4ffc00000, data 0x110576c/0x11ce000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:13.643418+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 35602432 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:14.643592+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 115048448 unmapped: 35561472 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:15.643725+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 34299904 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa08e000/0x0/0x4ffc00000, data 0x110576c/0x11ce000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1277163 data_alloc: 234881024 data_used: 14954496
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:16.643981+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 34299904 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:17.644174+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 34299904 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:18.644327+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 34299904 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:19.644666+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 34299904 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:20.644886+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 34299904 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa08e000/0x0/0x4ffc00000, data 0x110576c/0x11ce000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1277163 data_alloc: 234881024 data_used: 14954496
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:21.645071+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 34299904 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa08e000/0x0/0x4ffc00000, data 0x110576c/0x11ce000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:22.645311+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 34299904 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:23.645557+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 34299904 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:24.645787+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 34299904 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.286369324s of 17.366155624s, submitted: 16
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:25.645999+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118833152 unmapped: 31776768 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322133 data_alloc: 234881024 data_used: 14954496
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:26.646209+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118874112 unmapped: 31735808 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9b6e000/0x0/0x4ffc00000, data 0x162576c/0x16ee000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:27.646462+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118874112 unmapped: 31735808 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:28.646737+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118874112 unmapped: 31735808 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:29.647007+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118874112 unmapped: 31735808 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:30.647192+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118874112 unmapped: 31735808 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322133 data_alloc: 234881024 data_used: 14954496
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:31.647412+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118874112 unmapped: 31735808 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9b62000/0x0/0x4ffc00000, data 0x163176c/0x16fa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:32.647606+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118874112 unmapped: 31735808 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:33.647809+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118874112 unmapped: 31735808 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9b62000/0x0/0x4ffc00000, data 0x163176c/0x16fa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:34.648032+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118874112 unmapped: 31735808 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:35.648188+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118874112 unmapped: 31735808 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9b62000/0x0/0x4ffc00000, data 0x163176c/0x16fa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322133 data_alloc: 234881024 data_used: 14954496
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:36.648378+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118874112 unmapped: 31735808 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:37.648603+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118874112 unmapped: 31735808 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:38.648989+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118874112 unmapped: 31735808 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:39.649206+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118882304 unmapped: 31727616 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:40.649396+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118882304 unmapped: 31727616 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322133 data_alloc: 234881024 data_used: 14954496
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:41.649561+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118882304 unmapped: 31727616 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9b62000/0x0/0x4ffc00000, data 0x163176c/0x16fa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:42.649757+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118882304 unmapped: 31727616 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:43.650013+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118882304 unmapped: 31727616 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:44.650266+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118882304 unmapped: 31727616 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9b62000/0x0/0x4ffc00000, data 0x163176c/0x16fa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:45.650571+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118882304 unmapped: 31727616 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322133 data_alloc: 234881024 data_used: 14954496
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:46.650709+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118882304 unmapped: 31727616 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:47.650828+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118882304 unmapped: 31727616 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:48.651023+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118882304 unmapped: 31727616 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:49.651242+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9b62000/0x0/0x4ffc00000, data 0x163176c/0x16fa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118882304 unmapped: 31727616 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:50.651456+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118882304 unmapped: 31727616 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322133 data_alloc: 234881024 data_used: 14954496
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:51.651709+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118890496 unmapped: 31719424 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:52.651858+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118890496 unmapped: 31719424 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb4a3400
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fb4a3400 session 0x5613fba0d860
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc101800
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fc101800 session 0x5613faef61e0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613faef7860
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf56400
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf56400 session 0x5613fc6c2b40
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf57c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 28.623338699s of 28.730890274s, submitted: 32
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:53.651974+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf57c00 session 0x5613fc6c34a0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb4a3400
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118685696 unmapped: 31924224 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fb4a3400 session 0x5613fa1fe960
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf4bc00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf4bc00 session 0x5613fb686f00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613fb686b40
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf4bc00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf4bc00 session 0x5613fb4ee1e0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:54.652116+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118743040 unmapped: 31866880 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:55.652267+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f911f000/0x0/0x4ffc00000, data 0x20737ce/0x213d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118759424 unmapped: 31850496 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:56.652405+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1404172 data_alloc: 234881024 data_used: 14954496
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118759424 unmapped: 31850496 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:57.652584+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118759424 unmapped: 31850496 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf56400
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf56400 session 0x5613fb4ef2c0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:58.652770+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118759424 unmapped: 31850496 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf57c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf57c00 session 0x5613fba0fa40
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:11:59.652992+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118759424 unmapped: 31850496 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb4a3400
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fb4a3400 session 0x5613fb410f00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613f90683c0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:00.653234+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118284288 unmapped: 32325632 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f911d000/0x0/0x4ffc00000, data 0x20747f1/0x213f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf4bc00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:01.653437+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf56400
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1405934 data_alloc: 234881024 data_used: 14954496
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118300672 unmapped: 32309248 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:02.653681+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118448128 unmapped: 32161792 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f911d000/0x0/0x4ffc00000, data 0x20747f1/0x213f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:03.653870+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125108224 unmapped: 25501696 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:04.654067+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125108224 unmapped: 25501696 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:05.654292+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125108224 unmapped: 25501696 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:06.654430+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1477374 data_alloc: 234881024 data_used: 25419776
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125108224 unmapped: 25501696 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f911d000/0x0/0x4ffc00000, data 0x20747f1/0x213f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:07.654601+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125140992 unmapped: 25468928 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:08.654807+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125140992 unmapped: 25468928 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:09.655046+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125140992 unmapped: 25468928 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:10.655254+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f911d000/0x0/0x4ffc00000, data 0x20747f1/0x213f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125173760 unmapped: 25436160 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.346025467s of 17.504379272s, submitted: 45
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:11.655427+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1477374 data_alloc: 234881024 data_used: 25419776
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125173760 unmapped: 25436160 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:12.655709+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125173760 unmapped: 25436160 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:13.655869+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 130736128 unmapped: 19873792 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:14.656047+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 129826816 unmapped: 20783104 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:15.656202+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 130203648 unmapped: 20406272 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:16.656366+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572954 data_alloc: 234881024 data_used: 25776128
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f852e000/0x0/0x4ffc00000, data 0x2c4a7f1/0x2d15000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 130236416 unmapped: 20373504 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:17.656519+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 130236416 unmapped: 20373504 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:18.656785+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 130236416 unmapped: 20373504 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:19.656997+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 130236416 unmapped: 20373504 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:20.657165+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 129490944 unmapped: 21118976 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:21.657336+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1568026 data_alloc: 234881024 data_used: 25780224
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 129490944 unmapped: 21118976 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f8526000/0x0/0x4ffc00000, data 0x2c6b7f1/0x2d36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:22.657456+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 129490944 unmapped: 21118976 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:23.657586+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 129490944 unmapped: 21118976 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:24.657752+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 129490944 unmapped: 21118976 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:25.657884+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 129490944 unmapped: 21118976 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:26.658043+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f8526000/0x0/0x4ffc00000, data 0x2c6b7f1/0x2d36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1568026 data_alloc: 234881024 data_used: 25780224
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 129490944 unmapped: 21118976 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.940172195s of 16.268814087s, submitted: 140
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf4bc00 session 0x5613fc3e6960
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf56400 session 0x5613fc2aab40
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf57c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:27.658264+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 27672576 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf57c00 session 0x5613fba12f00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f8526000/0x0/0x4ffc00000, data 0x2c6b7f1/0x2d36000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:28.658454+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fc4b2800 session 0x5613fc756960
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe90000 session 0x5613fc3e7e00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 27648000 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:29.658718+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 27648000 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:30.658911+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f90ac000/0x0/0x4ffc00000, data 0x163276c/0x16fb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 27648000 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:31.659109+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335344 data_alloc: 234881024 data_used: 14954496
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 27648000 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:32.659340+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 27648000 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:33.659585+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 27648000 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:34.659770+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f90ac000/0x0/0x4ffc00000, data 0x163276c/0x16fb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 27648000 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:35.659980+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fc32dc00 session 0x5613fba0d4a0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe90400 session 0x5613fba13680
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 27648000 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613fc73d680
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:36.660164+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192764 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117612544 unmapped: 32997376 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:37.660316+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117612544 unmapped: 32997376 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:38.660452+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117612544 unmapped: 32997376 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf4bc00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.920195580s of 12.143515587s, submitted: 86
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:39.660715+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117612544 unmapped: 32997376 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:40.660951+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117612544 unmapped: 32997376 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:41.661161+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192896 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117612544 unmapped: 32997376 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:42.661386+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117612544 unmapped: 32997376 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:43.661734+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117612544 unmapped: 32997376 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:44.661937+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117612544 unmapped: 32997376 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:45.662133+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117612544 unmapped: 32997376 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:46.662404+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192896 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117612544 unmapped: 32997376 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:47.662604+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117612544 unmapped: 32997376 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:48.662765+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117612544 unmapped: 32997376 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:49.662955+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117612544 unmapped: 32997376 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.029423714s of 11.034767151s, submitted: 1
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:50.663126+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117800960 unmapped: 32808960 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:51.663272+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192968 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117940224 unmapped: 32669696 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:52.663501+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117940224 unmapped: 32669696 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:53.663679+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 32661504 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:54.663881+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 32661504 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:55.664042+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 32661504 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:56.664198+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192764 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 32661504 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:57.664391+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 32661504 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:58.664556+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 32661504 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:12:59.664831+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf56400
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf56400 session 0x5613fb2cb860
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613faef6960
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf56400
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf56400 session 0x5613fc49d680
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90000
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe90000 session 0x5613fc6c2960
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 32661504 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90400
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe90400 session 0x5613fba0cd20
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc32dc00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fc32dc00 session 0x5613fc5ec000
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613fc02a1e0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf56400
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf56400 session 0x5613fc02a3c0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90000
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe90000 session 0x5613f9069e00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:00.665010+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 32661504 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:01.665189+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa78d000/0x0/0x4ffc00000, data 0xa0577a/0xacf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1200150 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117956608 unmapped: 32653312 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:02.665371+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117956608 unmapped: 32653312 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:03.665522+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117956608 unmapped: 32653312 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:04.665710+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90400
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe90400 session 0x5613f9f3c000
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 32645120 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf57c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fa2f3000
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:05.665925+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 32645120 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:06.666104+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1203798 data_alloc: 218103808 data_used: 8138752
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 32636928 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa78d000/0x0/0x4ffc00000, data 0xa0577a/0xacf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:07.666516+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 32636928 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa78d000/0x0/0x4ffc00000, data 0xa0577a/0xacf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:08.666707+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 32636928 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:09.666918+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa78d000/0x0/0x4ffc00000, data 0xa0577a/0xacf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117972992 unmapped: 32636928 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:10.667135+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117981184 unmapped: 32628736 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:11.667311+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf57c00 session 0x5613fb2cbc20
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fa2f3000 session 0x5613fb6861e0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1203798 data_alloc: 218103808 data_used: 8138752
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117981184 unmapped: 32628736 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa78d000/0x0/0x4ffc00000, data 0xa0577a/0xacf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.843708038s of 21.802885056s, submitted: 390
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613fba13860
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:12.667524+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 33423360 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:13.667673+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 33423360 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:14.667825+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 33423360 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:15.668005+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 33423360 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:16.668217+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193682 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 33423360 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:17.668402+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 33415168 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:18.668586+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 33415168 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:19.668857+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 33415168 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:20.669034+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 33415168 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:21.669236+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193682 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 33415168 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:22.669370+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 33406976 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:23.669498+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 33406976 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:24.669716+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 33406976 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:25.669921+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 33406976 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:26.670103+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193682 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 33398784 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:27.670256+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf56400
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 33398784 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.850473404s of 15.884275436s, submitted: 10
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf56400 session 0x5613fb5c9c20
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf57c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf57c00 session 0x5613f9068f00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90000
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe90000 session 0x5613fc73da40
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613fc42da40
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fa2f3000
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fa2f3000 session 0x5613fb5c81e0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:28.670425+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117383168 unmapped: 33226752 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:29.670605+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117383168 unmapped: 33226752 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:30.670824+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa5f8000/0x0/0x4ffc00000, data 0xb9b76c/0xc64000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117383168 unmapped: 33226752 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:31.671081+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1215738 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117383168 unmapped: 33226752 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:32.671244+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf56400
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf56400 session 0x5613fa1ff0e0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117391360 unmapped: 33218560 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:33.671430+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf57c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf57c00 session 0x5613f88c0f00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117391360 unmapped: 33218560 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe90400
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe90400 session 0x5613fc02b0e0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:34.671594+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613f9dd34a0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe91800 session 0x5613fb47bc20
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbe91000 session 0x5613fc3e7860
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117432320 unmapped: 33177600 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fa2f3000
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf56400
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:35.671864+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117432320 unmapped: 33177600 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa5f8000/0x0/0x4ffc00000, data 0xb9b76c/0xc64000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:36.672032+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224447 data_alloc: 218103808 data_used: 8675328
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117432320 unmapped: 33177600 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:37.672199+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117432320 unmapped: 33177600 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa5f8000/0x0/0x4ffc00000, data 0xb9b76c/0xc64000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:38.672406+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117432320 unmapped: 33177600 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa5f8000/0x0/0x4ffc00000, data 0xb9b76c/0xc64000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:39.672578+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fa2f3000 session 0x5613fc5eda40
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.628873825s of 11.685560226s, submitted: 14
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf56400 session 0x5613faef61e0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117432320 unmapped: 33177600 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613fb3acb40
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:40.672759+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 33161216 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:41.672934+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197539 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 33161216 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:42.673095+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 33161216 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:43.673265+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 33161216 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:44.673444+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 33161216 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:45.673672+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fa2f3000
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 33161216 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:46.673865+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197671 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 33161216 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:47.674064+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 33161216 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:48.674239+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 33161216 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:49.674421+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 33161216 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:50.674599+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 33161216 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:51.674850+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197671 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 33161216 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:52.675066+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 33161216 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:53.675256+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 33161216 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:54.675414+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 33161216 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:55.675584+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 33161216 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:56.675756+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197671 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 33161216 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:57.675944+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 33161216 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.679887772s of 18.761703491s, submitted: 27
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:58.676106+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117456896 unmapped: 33153024 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:13:59.676328+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117456896 unmapped: 33153024 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:00.676532+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117456896 unmapped: 33153024 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:01.676727+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197539 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117456896 unmapped: 33153024 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:02.676888+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117456896 unmapped: 33153024 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:03.677082+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117456896 unmapped: 33153024 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:04.677251+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117465088 unmapped: 33144832 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:05.677445+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117465088 unmapped: 33144832 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:06.677599+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197539 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117465088 unmapped: 33144832 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:07.677788+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117465088 unmapped: 33144832 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:08.677966+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117465088 unmapped: 33144832 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:09.678191+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 117465088 unmapped: 33144832 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:10.678377+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 116850688 unmapped: 33759232 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:11.678583+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197539 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 116850688 unmapped: 33759232 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:12.678768+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: mgrc ms_handle_reset ms_handle_reset con 0x5613f9046800
Dec 07 10:28:48 compute-1 ceph-osd[77581]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2113101694
Dec 07 10:28:48 compute-1 ceph-osd[77581]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2113101694,v1:192.168.122.100:6801/2113101694]
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: get_auth_request con 0x5613faf56400 auth_method 0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: mgrc handle_mgr_configure stats_period=5
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 116973568 unmapped: 33636352 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:13.678942+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faeeb000 session 0x5613f9162000
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 116973568 unmapped: 33636352 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbe91000
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fafbfc00 session 0x5613fc214780
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf5dc00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:14.679086+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 116973568 unmapped: 33636352 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:15.679280+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 116973568 unmapped: 33636352 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:16.679452+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fafbfc00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.037788391s of 18.041795731s, submitted: 1
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fafbfc00 session 0x5613fc7561e0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262910 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf57c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf57c00 session 0x5613fc02b2c0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbf33c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbf33c00 session 0x5613fc3e74a0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc32f400
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fc32f400 session 0x5613fc3e61e0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118349824 unmapped: 32260096 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613fc3e6780
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:17.679709+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118366208 unmapped: 32243712 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:18.680054+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9f2e000/0x0/0x4ffc00000, data 0x126576c/0x132e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118366208 unmapped: 32243712 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:19.680249+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118366208 unmapped: 32243712 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:20.680466+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9f2e000/0x0/0x4ffc00000, data 0x126576c/0x132e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118366208 unmapped: 32243712 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:21.680647+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262910 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118366208 unmapped: 32243712 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf57c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf57c00 session 0x5613fb2cbe00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:22.680786+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fafbfc00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fafbfc00 session 0x5613fba12780
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118366208 unmapped: 32243712 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fbf33c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fbf33c00 session 0x5613fba123c0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:23.680923+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb645800
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fb645800 session 0x5613f9dd21e0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9f2e000/0x0/0x4ffc00000, data 0x126576c/0x132e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118366208 unmapped: 32243712 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:24.681068+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb645800
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 118374400 unmapped: 32235520 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9f2d000/0x0/0x4ffc00000, data 0x126577c/0x132f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:25.681199+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 120963072 unmapped: 29646848 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:26.681323+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1327280 data_alloc: 234881024 data_used: 16920576
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121192448 unmapped: 29417472 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:27.681510+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121192448 unmapped: 29417472 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:28.681678+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121192448 unmapped: 29417472 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:29.681882+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121192448 unmapped: 29417472 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:30.682061+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121192448 unmapped: 29417472 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9f2d000/0x0/0x4ffc00000, data 0x126577c/0x132f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:31.682247+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1327280 data_alloc: 234881024 data_used: 16920576
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121192448 unmapped: 29417472 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:32.682422+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9f2d000/0x0/0x4ffc00000, data 0x126577c/0x132f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121192448 unmapped: 29417472 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:33.682661+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121192448 unmapped: 29417472 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:34.682872+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121192448 unmapped: 29417472 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:35.683070+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.072046280s of 19.421592712s, submitted: 21
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9f2d000/0x0/0x4ffc00000, data 0x126577c/0x132f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 124084224 unmapped: 26525696 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:36.683267+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1361910 data_alloc: 234881024 data_used: 17256448
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9aa6000/0x0/0x4ffc00000, data 0x16e677c/0x17b0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125386752 unmapped: 25223168 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:37.683509+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126009344 unmapped: 24600576 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:38.683711+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9a9e000/0x0/0x4ffc00000, data 0x16ec77c/0x17b6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126009344 unmapped: 24600576 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:39.683938+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9a8e000/0x0/0x4ffc00000, data 0x16f677c/0x17c0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126009344 unmapped: 24600576 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9a8e000/0x0/0x4ffc00000, data 0x16f677c/0x17c0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:40.684156+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126009344 unmapped: 24600576 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:41.684369+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367884 data_alloc: 234881024 data_used: 17215488
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9a8e000/0x0/0x4ffc00000, data 0x16f677c/0x17c0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126009344 unmapped: 24600576 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:42.684546+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126009344 unmapped: 24600576 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:43.684728+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126009344 unmapped: 24600576 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9a8e000/0x0/0x4ffc00000, data 0x16f677c/0x17c0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:44.684918+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126009344 unmapped: 24600576 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:45.685128+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126009344 unmapped: 24600576 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:46.685367+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1367884 data_alloc: 234881024 data_used: 17215488
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126009344 unmapped: 24600576 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:47.685537+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9a8e000/0x0/0x4ffc00000, data 0x16f677c/0x17c0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126009344 unmapped: 24600576 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:48.685757+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9a8e000/0x0/0x4ffc00000, data 0x16f677c/0x17c0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126009344 unmapped: 24600576 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:49.686016+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126009344 unmapped: 24600576 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:50.686228+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.644939423s of 14.761515617s, submitted: 44
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fb645800 session 0x5613fba12b40
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613fba13c20
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf50800
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 29237248 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf50800 session 0x5613fc73d0e0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:51.686400+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205509 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 29237248 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:52.686601+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 29237248 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:53.686789+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 29237248 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:54.686891+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 29237248 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:55.687035+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 29237248 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:56.687217+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205509 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 29237248 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:57.687409+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 29237248 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:58.687597+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 29237248 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:14:59.687870+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 29237248 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:00.688004+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 29237248 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:01.688204+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205509 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 29237248 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:02.688396+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 29237248 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:03.688600+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 29237248 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:04.688783+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 29237248 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:05.689000+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 29237248 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:06.689168+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205509 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 29237248 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:07.689378+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 29237248 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:08.689521+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 29237248 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:09.689766+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 29237248 heap: 150609920 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:10.689976+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf50000
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf50000 session 0x5613fba0e3c0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf51400
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf51400 session 0x5613fba0fe00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613fba0f0e0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf50000
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf50000 session 0x5613fba0e5a0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf50800
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.314212799s of 20.416582108s, submitted: 31
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf50800 session 0x5613fba0f4a0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb645800
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fb645800 session 0x5613f9069a40
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc100800
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121421824 unmapped: 33390592 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fc100800 session 0x5613fc6c2b40
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613fc3e7680
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:11.690126+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf50000
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf50000 session 0x5613fc73d0e0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260980 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa08c000/0x0/0x4ffc00000, data 0x110776c/0x11d0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121421824 unmapped: 33390592 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:12.690377+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121421824 unmapped: 33390592 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:13.690565+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa08c000/0x0/0x4ffc00000, data 0x110776c/0x11d0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121421824 unmapped: 33390592 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:14.690752+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121421824 unmapped: 33390592 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf50800
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf50800 session 0x5613fc73c5a0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:15.690914+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb645800
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fb645800 session 0x5613fba0e3c0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121405440 unmapped: 33406976 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:16.691074+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf4f000
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf4f000 session 0x5613fba0fe00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613fba0e5a0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264291 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121413632 unmapped: 33398784 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:17.691261+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf50000
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf50800
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa08b000/0x0/0x4ffc00000, data 0x110777c/0x11d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 121430016 unmapped: 33382400 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:18.691453+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 122617856 unmapped: 32194560 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:19.691704+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 122617856 unmapped: 32194560 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:20.691853+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa08b000/0x0/0x4ffc00000, data 0x110777c/0x11d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 122617856 unmapped: 32194560 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:21.692055+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1317739 data_alloc: 234881024 data_used: 15486976
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 122617856 unmapped: 32194560 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:22.692229+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 122617856 unmapped: 32194560 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:23.692453+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa08b000/0x0/0x4ffc00000, data 0x110777c/0x11d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 122617856 unmapped: 32194560 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:24.692655+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 122617856 unmapped: 32194560 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:25.692843+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 122617856 unmapped: 32194560 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:26.692980+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa08b000/0x0/0x4ffc00000, data 0x110777c/0x11d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb645800
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fb645800 session 0x5613fba0e1e0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf4ac00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf4ac00 session 0x5613fc756f00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1317739 data_alloc: 234881024 data_used: 15486976
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc101400
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fc101400 session 0x5613fa1ff2c0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9046c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9046c00 session 0x5613fba0c5a0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 122634240 unmapped: 32178176 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.222948074s of 16.339372635s, submitted: 24
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:27.693122+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613f9162960
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf4ac00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf4ac00 session 0x5613f9cb7a40
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb645800
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fb645800 session 0x5613fb494f00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc101400
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fc101400 session 0x5613fc49da40
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fafbf800
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fafbf800 session 0x5613fb2cab40
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa08b000/0x0/0x4ffc00000, data 0x110777c/0x11d1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 123207680 unmapped: 31604736 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:28.693253+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 30179328 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:29.693394+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 129384448 unmapped: 25427968 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:30.693603+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9119000/0x0/0x4ffc00000, data 0x20787de/0x2143000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613fc02be00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 129327104 unmapped: 25485312 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:31.693883+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf4ac00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf4ac00 session 0x5613faef7860
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1440180 data_alloc: 234881024 data_used: 15785984
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fafbf800
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fafbf800 session 0x5613fc49c780
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fb645800
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 129343488 unmapped: 25468928 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:32.694052+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fb645800 session 0x5613f95590e0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 129212416 unmapped: 25600000 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:33.694239+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc101400
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc32fc00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 129212416 unmapped: 25600000 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:34.694413+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 129867776 unmapped: 24944640 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:35.694749+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 132448256 unmapped: 22364160 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:36.694943+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f90e7000/0x0/0x4ffc00000, data 0x20a8811/0x2175000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1493907 data_alloc: 234881024 data_used: 23171072
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f90e7000/0x0/0x4ffc00000, data 0x20a8811/0x2175000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 132481024 unmapped: 22331392 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:37.695198+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f90e7000/0x0/0x4ffc00000, data 0x20a8811/0x2175000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 132481024 unmapped: 22331392 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:38.695473+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 132481024 unmapped: 22331392 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:39.695681+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 132481024 unmapped: 22331392 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:40.695818+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 132497408 unmapped: 22315008 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:41.696045+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1493907 data_alloc: 234881024 data_used: 23171072
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:42.696237+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 132497408 unmapped: 22315008 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:43.696387+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 132497408 unmapped: 22315008 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f90e7000/0x0/0x4ffc00000, data 0x20a8811/0x2175000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:44.696593+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 132497408 unmapped: 22315008 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f90e7000/0x0/0x4ffc00000, data 0x20a8811/0x2175000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:45.696737+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 132497408 unmapped: 22315008 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.218862534s of 18.503250122s, submitted: 89
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:46.696862+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 135749632 unmapped: 19062784 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572493 data_alloc: 234881024 data_used: 23388160
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:47.697066+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 136396800 unmapped: 18415616 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:48.697251+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 136429568 unmapped: 18382848 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f86e4000/0x0/0x4ffc00000, data 0x2aab811/0x2b78000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:49.697517+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 136429568 unmapped: 18382848 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:50.697761+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 136429568 unmapped: 18382848 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f86e4000/0x0/0x4ffc00000, data 0x2aab811/0x2b78000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:51.697950+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 136429568 unmapped: 18382848 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1577791 data_alloc: 234881024 data_used: 23384064
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:52.698136+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 136429568 unmapped: 18382848 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f86e4000/0x0/0x4ffc00000, data 0x2aab811/0x2b78000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:53.698398+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 136429568 unmapped: 18382848 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:54.698694+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 136429568 unmapped: 18382848 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:55.699006+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 136429568 unmapped: 18382848 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:56.699152+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 136429568 unmapped: 18382848 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1577479 data_alloc: 234881024 data_used: 23388160
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:57.699333+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 136429568 unmapped: 18382848 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:58.700315+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 136429568 unmapped: 18382848 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f86c0000/0x0/0x4ffc00000, data 0x2acf811/0x2b9c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.937669754s of 13.194371223s, submitted: 97
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:15:59.701372+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 136470528 unmapped: 18341888 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fc101400 session 0x5613fa2130e0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fc32fc00 session 0x5613fc2aa000
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:00.701516+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 135651328 unmapped: 19161088 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613fc6c2f00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:01.701863+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 130596864 unmapped: 24215552 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1395812 data_alloc: 234881024 data_used: 15728640
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:02.702099+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 130596864 unmapped: 24215552 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:03.702347+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 130596864 unmapped: 24215552 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf50000 session 0x5613fba0e960
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf50800 session 0x5613fa212f00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf4ac00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:04.702557+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 130605056 unmapped: 24207360 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9446000/0x0/0x4ffc00000, data 0x193c77c/0x1a06000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [1,0,0,2])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf4ac00 session 0x5613f9558d20
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:05.702776+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 30916608 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:06.702960+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 30916608 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228937 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:07.703187+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 30916608 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fe000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:08.703482+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 30916608 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fe000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:09.703758+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 30916608 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fe000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:10.703978+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 30916608 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:11.704152+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 30916608 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228937 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:12.704419+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 30916608 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:13.704574+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 30916608 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fe000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:14.704831+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 30916608 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:15.705054+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 30916608 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fe000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:16.705400+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 30916608 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228937 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:17.705610+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 30916608 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:18.705839+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 30916608 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:19.706177+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fe000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 30916608 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:20.706425+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 30916608 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:21.706767+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fe000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 30916608 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228937 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:22.706997+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 30916608 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:23.707199+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 30916608 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:24.707436+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 30916608 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:25.707720+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 123895808 unmapped: 30916608 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf4ac00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:26.707965+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 27.124614716s of 27.378499985s, submitted: 88
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf4ac00 session 0x5613fac71860
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fe000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613fb3ad680
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf50000
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf50000 session 0x5613fb3adc20
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf50800
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf50800 session 0x5613f9dd21e0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613fc32fc00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613fc32fc00 session 0x5613fc73c5a0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125108224 unmapped: 29704192 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1254370 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:27.708167+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125108224 unmapped: 29704192 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:28.708338+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125108224 unmapped: 29704192 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:29.708706+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa1cd000/0x0/0x4ffc00000, data 0xbb57ce/0xc7f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125108224 unmapped: 29704192 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:30.708866+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125108224 unmapped: 29704192 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:31.709060+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613f9e95c00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613f9e95c00 session 0x5613fc73cf00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa1cd000/0x0/0x4ffc00000, data 0xbb57ce/0xc7f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf4ac00
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125108224 unmapped: 29704192 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf50000
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa1cd000/0x0/0x4ffc00000, data 0xbb57ce/0xc7f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:32.709296+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1254370 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125108224 unmapped: 29704192 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:33.709486+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa1cd000/0x0/0x4ffc00000, data 0xbb57ce/0xc7f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125280256 unmapped: 29532160 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:34.709686+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 29515776 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:35.709876+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 29515776 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:36.710091+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 29515776 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:37.710374+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1269266 data_alloc: 234881024 data_used: 9789440
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 29515776 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:38.710547+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa1cd000/0x0/0x4ffc00000, data 0xbb57ce/0xc7f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 29515776 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:39.710750+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 29515776 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:40.710999+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 29515776 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:41.711160+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 29515776 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:42.711325+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1269266 data_alloc: 234881024 data_used: 9789440
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa1cd000/0x0/0x4ffc00000, data 0xbb57ce/0xc7f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 29515776 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:43.711538+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.777629852s of 17.851564407s, submitted: 26
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126697472 unmapped: 28114944 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:44.711770+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 128819200 unmapped: 25993216 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:45.712041+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9b37000/0x0/0x4ffc00000, data 0x123d7ce/0x1307000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [0,0,1])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 128393216 unmapped: 26419200 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:46.712212+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 128393216 unmapped: 26419200 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:47.712385+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1336436 data_alloc: 234881024 data_used: 10362880
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 128393216 unmapped: 26419200 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:48.712588+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 128393216 unmapped: 26419200 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:49.712864+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 128393216 unmapped: 26419200 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9b01000/0x0/0x4ffc00000, data 0x12797ce/0x1343000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:50.713112+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 128393216 unmapped: 26419200 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:51.713338+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9b01000/0x0/0x4ffc00000, data 0x12797ce/0x1343000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 128393216 unmapped: 26419200 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:52.713478+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1330852 data_alloc: 234881024 data_used: 10366976
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9b06000/0x0/0x4ffc00000, data 0x127c7ce/0x1346000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 128393216 unmapped: 26419200 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:53.713713+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 128393216 unmapped: 26419200 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:54.713895+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 128393216 unmapped: 26419200 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:55.714081+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 128393216 unmapped: 26419200 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:56.714300+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9b06000/0x0/0x4ffc00000, data 0x127c7ce/0x1346000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 128393216 unmapped: 26419200 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:57.714541+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1330852 data_alloc: 234881024 data_used: 10366976
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 128393216 unmapped: 26419200 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:58.714730+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 128393216 unmapped: 26419200 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:16:59.714964+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4f9b06000/0x0/0x4ffc00000, data 0x127c7ce/0x1346000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 128393216 unmapped: 26419200 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:00.715196+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.237125397s of 16.531042099s, submitted: 113
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf4ac00 session 0x5613fc73c3c0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf50000 session 0x5613fba0c3c0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: handle_auth_request added challenge on 0x5613faf50800
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:01.715359+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 ms_handle_reset con 0x5613faf50800 session 0x5613fc73d680
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:02.715549+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:03.715744+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:04.715938+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:05.716141+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:06.716409+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:07.716587+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:08.716816+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:09.717108+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:10.717291+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:11.717471+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:12.717677+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:13.717872+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:14.718011+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:15.718179+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:16.718445+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:17.718606+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:18.718860+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:19.719042+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:20.719221+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:21.719377+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:22.719700+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:23.719917+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:24.720082+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:25.720267+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:26.720495+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:27.720806+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:28.720976+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:29.721147+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:30.721364+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:31.721507+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:32.737722+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:33.737889+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:34.738089+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:35.738276+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:36.738519+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:37.738738+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:38.738901+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126361600 unmapped: 28450816 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:39.739089+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126377984 unmapped: 28434432 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:40.739249+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126377984 unmapped: 28434432 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:41.739485+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126377984 unmapped: 28434432 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:42.739688+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126377984 unmapped: 28434432 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:43.739875+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126377984 unmapped: 28434432 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:44.740063+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126377984 unmapped: 28434432 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:45.740279+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126377984 unmapped: 28434432 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:46.740472+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126377984 unmapped: 28434432 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:47.740645+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126377984 unmapped: 28434432 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:48.740815+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126377984 unmapped: 28434432 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:49.741086+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126377984 unmapped: 28434432 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:50.741239+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126386176 unmapped: 28426240 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:51.741464+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126386176 unmapped: 28426240 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:52.741672+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126386176 unmapped: 28426240 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:53.741812+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126386176 unmapped: 28426240 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:54.741989+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126386176 unmapped: 28426240 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:55.742153+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126386176 unmapped: 28426240 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:56.742292+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126386176 unmapped: 28426240 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:57.742473+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126394368 unmapped: 28418048 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:58.742657+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126394368 unmapped: 28418048 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:17:59.742891+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126394368 unmapped: 28418048 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:00.743090+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126394368 unmapped: 28418048 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:01.743259+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126394368 unmapped: 28418048 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:02.743495+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126394368 unmapped: 28418048 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:03.743663+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126402560 unmapped: 28409856 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:04.743802+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126402560 unmapped: 28409856 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:05.744021+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126402560 unmapped: 28409856 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:06.744263+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126402560 unmapped: 28409856 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:07.744468+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126402560 unmapped: 28409856 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:08.744689+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126402560 unmapped: 28409856 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:09.744896+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126402560 unmapped: 28409856 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:10.745067+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126402560 unmapped: 28409856 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:11.745253+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126402560 unmapped: 28409856 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:12.745484+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126402560 unmapped: 28409856 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:13.745707+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126402560 unmapped: 28409856 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:14.745885+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126402560 unmapped: 28409856 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:15.746062+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126402560 unmapped: 28409856 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:16.746235+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126402560 unmapped: 28409856 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:17.746450+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126402560 unmapped: 28409856 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:18.746688+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126402560 unmapped: 28409856 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:19.746881+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126410752 unmapped: 28401664 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:20.747020+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126410752 unmapped: 28401664 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:21.747165+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126410752 unmapped: 28401664 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:22.747341+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126410752 unmapped: 28401664 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:23.747492+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: do_command 'config diff' '{prefix=config diff}'
Dec 07 10:28:48 compute-1 ceph-osd[77581]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 07 10:28:48 compute-1 ceph-osd[77581]: do_command 'config show' '{prefix=config show}'
Dec 07 10:28:48 compute-1 ceph-osd[77581]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126369792 unmapped: 28442624 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: do_command 'counter dump' '{prefix=counter dump}'
Dec 07 10:28:48 compute-1 ceph-osd[77581]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 07 10:28:48 compute-1 ceph-osd[77581]: do_command 'counter schema' '{prefix=counter schema}'
Dec 07 10:28:48 compute-1 ceph-osd[77581]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:24.747664+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126091264 unmapped: 28721152 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:25.747791+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126205952 unmapped: 28606464 heap: 154812416 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: do_command 'log dump' '{prefix=log dump}'
Dec 07 10:28:48 compute-1 ceph-osd[77581]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:26.747925+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: do_command 'perf dump' '{prefix=perf dump}'
Dec 07 10:28:48 compute-1 ceph-osd[77581]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126279680 unmapped: 39575552 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Dec 07 10:28:48 compute-1 ceph-osd[77581]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Dec 07 10:28:48 compute-1 ceph-osd[77581]: do_command 'perf schema' '{prefix=perf schema}'
Dec 07 10:28:48 compute-1 ceph-osd[77581]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:27.748064+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126025728 unmapped: 39829504 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:28.748220+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126025728 unmapped: 39829504 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:29.748391+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126033920 unmapped: 39821312 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:30.748664+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126033920 unmapped: 39821312 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:31.748825+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126033920 unmapped: 39821312 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:32.748991+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126033920 unmapped: 39821312 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:33.749116+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126042112 unmapped: 39813120 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:34.749283+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126042112 unmapped: 39813120 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:35.749469+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126042112 unmapped: 39813120 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:36.749655+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126042112 unmapped: 39813120 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:37.749871+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126042112 unmapped: 39813120 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:38.750013+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126042112 unmapped: 39813120 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:39.750178+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126042112 unmapped: 39813120 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:40.750312+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126042112 unmapped: 39813120 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:41.750450+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126042112 unmapped: 39813120 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:42.750594+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126042112 unmapped: 39813120 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:43.750724+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126042112 unmapped: 39813120 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:44.750904+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126042112 unmapped: 39813120 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:45.751078+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126042112 unmapped: 39813120 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:46.751248+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126042112 unmapped: 39813120 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:47.751386+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126042112 unmapped: 39813120 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:48.751605+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126042112 unmapped: 39813120 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:49.751845+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126042112 unmapped: 39813120 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:50.752090+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126050304 unmapped: 39804928 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:51.752253+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126050304 unmapped: 39804928 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:52.752409+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126050304 unmapped: 39804928 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:53.752576+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126050304 unmapped: 39804928 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:54.752727+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126050304 unmapped: 39804928 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:55.753569+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126050304 unmapped: 39804928 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:56.753706+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126050304 unmapped: 39804928 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:57.753873+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126050304 unmapped: 39804928 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:58.754510+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126050304 unmapped: 39804928 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:18:59.754769+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126050304 unmapped: 39804928 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:00.754985+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126050304 unmapped: 39804928 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:01.755191+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126050304 unmapped: 39804928 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:02.755397+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126050304 unmapped: 39804928 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:03.755701+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126050304 unmapped: 39804928 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:04.755894+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126050304 unmapped: 39804928 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:05.756100+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126050304 unmapped: 39804928 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:06.756311+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126058496 unmapped: 39796736 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:07.756530+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126058496 unmapped: 39796736 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:08.756761+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126058496 unmapped: 39796736 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:09.757001+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126058496 unmapped: 39796736 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:10.757232+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126058496 unmapped: 39796736 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:11.757478+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126058496 unmapped: 39796736 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:12.757705+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126058496 unmapped: 39796736 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:13.757951+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126058496 unmapped: 39796736 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:14.758140+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126058496 unmapped: 39796736 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:15.758329+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126058496 unmapped: 39796736 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:16.758512+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126058496 unmapped: 39796736 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:17.758670+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126058496 unmapped: 39796736 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:18.758861+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126058496 unmapped: 39796736 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:19.759105+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126058496 unmapped: 39796736 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:20.759262+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126058496 unmapped: 39796736 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:21.759473+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126058496 unmapped: 39796736 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:22.759646+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126066688 unmapped: 39788544 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:23.759874+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126066688 unmapped: 39788544 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:24.760039+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126066688 unmapped: 39788544 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:25.760208+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126066688 unmapped: 39788544 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:26.760392+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126066688 unmapped: 39788544 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:27.760567+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126066688 unmapped: 39788544 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:28.760800+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126066688 unmapped: 39788544 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:29.761982+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126066688 unmapped: 39788544 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:30.762361+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126066688 unmapped: 39788544 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:31.762570+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126066688 unmapped: 39788544 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:32.762825+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126066688 unmapped: 39788544 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:33.763068+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126066688 unmapped: 39788544 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:34.763224+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126066688 unmapped: 39788544 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:35.763408+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126066688 unmapped: 39788544 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:36.763861+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126066688 unmapped: 39788544 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:37.764015+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126066688 unmapped: 39788544 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:38.764164+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126074880 unmapped: 39780352 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:39.764366+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126074880 unmapped: 39780352 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:40.764524+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125157376 unmapped: 40697856 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:41.764697+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125157376 unmapped: 40697856 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:42.764846+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125157376 unmapped: 40697856 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:43.765001+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125157376 unmapped: 40697856 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:44.765163+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125157376 unmapped: 40697856 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:45.765328+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125157376 unmapped: 40697856 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:46.765649+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125157376 unmapped: 40697856 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:47.765820+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125157376 unmapped: 40697856 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:48.766141+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125165568 unmapped: 40689664 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:49.766470+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125165568 unmapped: 40689664 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:50.766700+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125165568 unmapped: 40689664 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:51.766964+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125165568 unmapped: 40689664 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:52.767224+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125165568 unmapped: 40689664 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:53.767489+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125165568 unmapped: 40689664 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:54.767693+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125165568 unmapped: 40689664 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:55.767949+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125165568 unmapped: 40689664 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:56.768114+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125165568 unmapped: 40689664 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:57.768321+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125165568 unmapped: 40689664 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:58.768602+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125165568 unmapped: 40689664 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:19:59.768980+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125165568 unmapped: 40689664 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:00.769237+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125165568 unmapped: 40689664 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:01.769462+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125165568 unmapped: 40689664 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:02.769673+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125165568 unmapped: 40689664 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:03.769961+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125165568 unmapped: 40689664 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:04.770203+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125165568 unmapped: 40689664 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:05.770425+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125165568 unmapped: 40689664 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:06.770728+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125165568 unmapped: 40689664 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:07.770965+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125165568 unmapped: 40689664 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:08.771128+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125165568 unmapped: 40689664 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:09.771365+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125165568 unmapped: 40689664 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:10.771564+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125165568 unmapped: 40689664 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:11.771794+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125165568 unmapped: 40689664 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:12.772057+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125173760 unmapped: 40681472 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:13.772207+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 13K writes, 50K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 13K writes, 3974 syncs, 3.44 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2601 writes, 8027 keys, 2601 commit groups, 1.0 writes per commit group, ingest: 7.55 MB, 0.01 MB/s
                                           Interval WAL: 2601 writes, 1129 syncs, 2.30 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125173760 unmapped: 40681472 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:14.772441+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125173760 unmapped: 40681472 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:15.772614+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125173760 unmapped: 40681472 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:16.772806+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125173760 unmapped: 40681472 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:17.773072+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125173760 unmapped: 40681472 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:18.773230+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125173760 unmapped: 40681472 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:19.773486+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125173760 unmapped: 40681472 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:20.773716+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125173760 unmapped: 40681472 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:21.774033+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125173760 unmapped: 40681472 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:22.774210+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125173760 unmapped: 40681472 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:23.774453+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125173760 unmapped: 40681472 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:24.774714+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125173760 unmapped: 40681472 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:25.774913+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125173760 unmapped: 40681472 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:26.775060+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125173760 unmapped: 40681472 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:27.775265+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125173760 unmapped: 40681472 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:28.775460+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125173760 unmapped: 40681472 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:29.775666+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125181952 unmapped: 40673280 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:30.775832+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125181952 unmapped: 40673280 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:31.776040+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125181952 unmapped: 40673280 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:32.776263+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125181952 unmapped: 40673280 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:33.776497+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125181952 unmapped: 40673280 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:34.776713+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125181952 unmapped: 40673280 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:35.776899+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125181952 unmapped: 40673280 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:36.777101+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125181952 unmapped: 40673280 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:37.777278+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125181952 unmapped: 40673280 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:38.777427+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125181952 unmapped: 40673280 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:39.778085+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125181952 unmapped: 40673280 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:40.778274+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125181952 unmapped: 40673280 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:41.778461+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125181952 unmapped: 40673280 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:42.778720+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125181952 unmapped: 40673280 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:43.778868+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125181952 unmapped: 40673280 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:44.779079+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125181952 unmapped: 40673280 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:45.779346+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125181952 unmapped: 40673280 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:46.779578+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125181952 unmapped: 40673280 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:47.779812+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125181952 unmapped: 40673280 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:48.780041+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125190144 unmapped: 40665088 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:49.780278+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125190144 unmapped: 40665088 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:50.780479+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125190144 unmapped: 40665088 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:51.780748+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125190144 unmapped: 40665088 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:52.780957+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125190144 unmapped: 40665088 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:53.781115+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125190144 unmapped: 40665088 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:54.781249+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125190144 unmapped: 40665088 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:55.781445+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125190144 unmapped: 40665088 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:56.781609+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125190144 unmapped: 40665088 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:57.781814+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125190144 unmapped: 40665088 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:58.782009+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125198336 unmapped: 40656896 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:20:59.782248+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:00.782419+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125198336 unmapped: 40656896 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:01.783720+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125198336 unmapped: 40656896 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:02.783898+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125198336 unmapped: 40656896 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:03.784091+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125198336 unmapped: 40656896 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:04.784275+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125198336 unmapped: 40656896 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:05.784444+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125198336 unmapped: 40656896 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:06.784599+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125198336 unmapped: 40656896 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:07.784830+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125198336 unmapped: 40656896 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:08.785233+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125198336 unmapped: 40656896 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:09.785462+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125198336 unmapped: 40656896 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:10.785680+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125198336 unmapped: 40656896 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:11.785905+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125198336 unmapped: 40656896 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:12.786102+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125198336 unmapped: 40656896 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:13.786328+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125198336 unmapped: 40656896 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:14.786504+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125198336 unmapped: 40656896 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:15.786739+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125198336 unmapped: 40656896 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:16.786948+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125198336 unmapped: 40656896 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:17.787900+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125206528 unmapped: 40648704 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:18.788084+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125206528 unmapped: 40648704 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:19.788356+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125206528 unmapped: 40648704 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:20.788557+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125206528 unmapped: 40648704 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:21.788754+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125206528 unmapped: 40648704 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:22.788931+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125206528 unmapped: 40648704 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:23.789098+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125206528 unmapped: 40648704 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:24.789279+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125206528 unmapped: 40648704 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:25.789483+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125206528 unmapped: 40648704 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:26.789717+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125206528 unmapped: 40648704 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:27.789886+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125206528 unmapped: 40648704 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:28.790063+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125206528 unmapped: 40648704 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:29.790294+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125206528 unmapped: 40648704 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:30.790546+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125206528 unmapped: 40648704 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:31.790757+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125206528 unmapped: 40648704 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:32.790958+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125206528 unmapped: 40648704 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:33.791160+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125206528 unmapped: 40648704 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:34.791378+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125214720 unmapped: 40640512 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:35.791700+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125214720 unmapped: 40640512 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:36.791890+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125214720 unmapped: 40640512 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:37.792140+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125214720 unmapped: 40640512 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:38.792288+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125214720 unmapped: 40640512 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:39.792508+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125214720 unmapped: 40640512 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:40.792719+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125214720 unmapped: 40640512 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:41.792935+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125214720 unmapped: 40640512 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:42.793094+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125214720 unmapped: 40640512 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:43.793242+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125214720 unmapped: 40640512 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:44.793480+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125214720 unmapped: 40640512 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:45.793726+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125214720 unmapped: 40640512 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:46.793933+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125214720 unmapped: 40640512 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:47.794080+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125214720 unmapped: 40640512 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:48.794241+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125214720 unmapped: 40640512 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:49.794504+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125214720 unmapped: 40640512 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:50.794727+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125214720 unmapped: 40640512 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:51.794926+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125214720 unmapped: 40640512 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:52.795134+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125222912 unmapped: 40632320 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:53.795345+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125222912 unmapped: 40632320 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:54.795555+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125222912 unmapped: 40632320 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:55.795764+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125222912 unmapped: 40632320 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:56.795909+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125222912 unmapped: 40632320 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:57.796037+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125222912 unmapped: 40632320 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:58.796203+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125222912 unmapped: 40632320 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:21:59.796447+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125222912 unmapped: 40632320 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:00.796685+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125222912 unmapped: 40632320 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:01.796897+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125222912 unmapped: 40632320 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:02.797105+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125222912 unmapped: 40632320 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:03.797333+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125222912 unmapped: 40632320 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:04.797559+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125222912 unmapped: 40632320 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:05.797734+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125231104 unmapped: 40624128 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:06.797919+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125231104 unmapped: 40624128 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:07.798125+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125231104 unmapped: 40624128 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:08.798308+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125231104 unmapped: 40624128 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:09.798508+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125231104 unmapped: 40624128 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:10.798673+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125231104 unmapped: 40624128 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:11.798833+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125231104 unmapped: 40624128 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:12.798993+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125231104 unmapped: 40624128 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:13.799149+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125231104 unmapped: 40624128 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:14.799389+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125231104 unmapped: 40624128 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:15.799599+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125231104 unmapped: 40624128 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:16.799852+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125231104 unmapped: 40624128 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:17.800026+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125231104 unmapped: 40624128 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:18.800229+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125239296 unmapped: 40615936 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:19.800684+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125239296 unmapped: 40615936 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:20.800947+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125239296 unmapped: 40615936 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:21.801188+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125239296 unmapped: 40615936 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:22.801424+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125239296 unmapped: 40615936 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:23.801731+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125239296 unmapped: 40615936 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:24.801954+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125239296 unmapped: 40615936 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:25.802137+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125239296 unmapped: 40615936 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:26.802297+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125239296 unmapped: 40615936 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:27.802494+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125239296 unmapped: 40615936 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:28.802736+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125239296 unmapped: 40615936 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:29.803043+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125239296 unmapped: 40615936 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:30.803244+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125239296 unmapped: 40615936 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:31.803481+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125239296 unmapped: 40615936 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:32.803765+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125239296 unmapped: 40615936 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:33.803978+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125239296 unmapped: 40615936 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:34.804149+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125247488 unmapped: 40607744 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:35.804328+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125247488 unmapped: 40607744 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:36.804548+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125247488 unmapped: 40607744 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:37.804722+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125247488 unmapped: 40607744 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:38.805670+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125247488 unmapped: 40607744 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:39.805957+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125247488 unmapped: 40607744 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:40.806198+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125247488 unmapped: 40607744 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:41.806457+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125247488 unmapped: 40607744 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:42.806747+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125247488 unmapped: 40607744 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:43.806948+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125247488 unmapped: 40607744 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:44.807122+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125247488 unmapped: 40607744 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:45.807331+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125247488 unmapped: 40607744 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:46.807548+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125247488 unmapped: 40607744 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:47.807733+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125247488 unmapped: 40607744 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fa3fd000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:48.807959+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125247488 unmapped: 40607744 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239802 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:49.808177+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125247488 unmapped: 40607744 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 349.483520508s of 349.566741943s, submitted: 28
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:50.808386+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 125394944 unmapped: 40460288 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:51.808687+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126615552 unmapped: 39239680 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:52.808895+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126648320 unmapped: 39206912 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:53.809142+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126656512 unmapped: 39198720 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:54.809334+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126664704 unmapped: 39190528 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:55.809465+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126664704 unmapped: 39190528 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:56.809675+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126664704 unmapped: 39190528 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:57.809869+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126664704 unmapped: 39190528 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:58.810063+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126672896 unmapped: 39182336 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:22:59.810272+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126672896 unmapped: 39182336 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:00.810513+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126672896 unmapped: 39182336 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:01.810686+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126672896 unmapped: 39182336 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:02.810919+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126672896 unmapped: 39182336 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:03.811091+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126672896 unmapped: 39182336 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:04.811359+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126672896 unmapped: 39182336 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:05.811569+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126672896 unmapped: 39182336 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:06.811809+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126681088 unmapped: 39174144 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:07.812030+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126681088 unmapped: 39174144 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:08.812274+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126689280 unmapped: 39165952 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:09.812591+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126689280 unmapped: 39165952 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:10.812888+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126689280 unmapped: 39165952 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:11.813057+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126689280 unmapped: 39165952 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:12.813269+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126689280 unmapped: 39165952 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:13.813512+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126689280 unmapped: 39165952 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:14.813731+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126697472 unmapped: 39157760 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:15.813905+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126697472 unmapped: 39157760 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:16.814109+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126697472 unmapped: 39157760 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:17.814282+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126697472 unmapped: 39157760 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:18.814466+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126697472 unmapped: 39157760 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:19.814682+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126697472 unmapped: 39157760 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:20.814876+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126705664 unmapped: 39149568 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:21.815036+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126705664 unmapped: 39149568 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:22.815251+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126705664 unmapped: 39149568 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:23.815421+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126705664 unmapped: 39149568 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:24.815679+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126713856 unmapped: 39141376 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:25.815874+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126713856 unmapped: 39141376 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:26.816065+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126713856 unmapped: 39141376 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:27.816273+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126713856 unmapped: 39141376 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:28.816501+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126713856 unmapped: 39141376 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:29.816726+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126713856 unmapped: 39141376 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:30.816905+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126713856 unmapped: 39141376 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:31.817097+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126713856 unmapped: 39141376 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:32.817349+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126713856 unmapped: 39141376 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:33.817595+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126713856 unmapped: 39141376 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:34.817883+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126713856 unmapped: 39141376 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:35.818109+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126713856 unmapped: 39141376 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:36.818309+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126713856 unmapped: 39141376 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:37.818466+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126713856 unmapped: 39141376 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:38.818702+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126722048 unmapped: 39133184 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:39.818946+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126722048 unmapped: 39133184 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:40.819155+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126722048 unmapped: 39133184 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:41.819323+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126722048 unmapped: 39133184 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:42.819489+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126722048 unmapped: 39133184 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:43.819722+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126722048 unmapped: 39133184 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:44.819926+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126722048 unmapped: 39133184 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:45.820139+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126722048 unmapped: 39133184 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:46.820312+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:47.820508+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:48.820710+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:49.820958+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:50.821127+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:51.821355+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:52.821700+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:53.821902+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:54.822143+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:55.822336+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:56.822546+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:57.822701+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:58.822902+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:23:59.823197+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:00.823377+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:01.823602+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:02.823908+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:03.824116+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:04.824358+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:05.824573+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:06.824788+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:07.825047+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:08.825219+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:09.825442+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:10.825689+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:11.825859+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:12.826059+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:13.826250+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:14.826422+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:15.826681+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:16.826905+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:17.827146+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:18.827548+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:19.827884+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:20.828061+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:21.828283+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:22.828500+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:23.828702+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:24.828915+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:25.829095+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:26.829300+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:27.829543+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:28.829750+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:29.829929+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:30.830079+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:31.830235+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:32.830417+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:33.830551+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:34.830743+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:35.830930+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:36.831075+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:37.831259+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:38.831461+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:39.831686+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:40.831855+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:41.832027+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:42.832176+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:43.832371+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:44.832546+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:45.832785+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:46.833048+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:47.833225+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:48.833794+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:49.833992+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:50.834195+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:51.834353+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:52.834702+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:53.834842+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:54.834984+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:55.835250+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:56.835421+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:57.835613+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:58.835809+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:24:59.836160+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:00.836408+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:01.836836+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:02.836962+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:03.837110+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:04.837308+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:05.837509+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:06.837690+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:07.837908+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:08.838065+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:09.838285+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:10.838500+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:11.838709+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 39124992 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:12.838931+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126738432 unmapped: 39116800 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:13.839111+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126738432 unmapped: 39116800 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets getting new tickets!
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:14.839383+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _finish_auth 0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:14.845738+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126738432 unmapped: 39116800 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:15.839558+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126738432 unmapped: 39116800 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:16.839754+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126738432 unmapped: 39116800 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:17.839943+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126738432 unmapped: 39116800 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:18.840135+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126738432 unmapped: 39116800 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:19.840341+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126738432 unmapped: 39116800 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:20.840485+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126738432 unmapped: 39116800 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:21.840681+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126738432 unmapped: 39116800 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:22.840853+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126738432 unmapped: 39116800 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:23.841031+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126738432 unmapped: 39116800 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:24.841238+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126738432 unmapped: 39116800 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:25.841414+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126738432 unmapped: 39116800 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:26.841584+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126738432 unmapped: 39116800 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:27.841808+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126738432 unmapped: 39116800 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:28.841996+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126738432 unmapped: 39116800 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:29.842261+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126738432 unmapped: 39116800 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:30.842491+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126738432 unmapped: 39116800 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:31.842709+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126738432 unmapped: 39116800 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:32.842966+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126746624 unmapped: 39108608 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:33.843165+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126746624 unmapped: 39108608 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:34.843386+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126746624 unmapped: 39108608 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:35.843683+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126746624 unmapped: 39108608 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:36.843948+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126746624 unmapped: 39108608 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:37.844229+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126746624 unmapped: 39108608 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:38.844497+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126746624 unmapped: 39108608 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:39.844738+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126746624 unmapped: 39108608 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:40.844979+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126746624 unmapped: 39108608 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:41.845212+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126746624 unmapped: 39108608 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:42.845410+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126746624 unmapped: 39108608 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:43.845612+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126746624 unmapped: 39108608 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:44.845892+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126746624 unmapped: 39108608 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:45.846093+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126746624 unmapped: 39108608 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:46.846322+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126746624 unmapped: 39108608 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:47.846491+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126754816 unmapped: 39100416 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:48.846675+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126754816 unmapped: 39100416 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:49.846876+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126754816 unmapped: 39100416 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:50.847018+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126754816 unmapped: 39100416 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:51.849109+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126754816 unmapped: 39100416 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:52.849274+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126754816 unmapped: 39100416 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:53.849509+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126754816 unmapped: 39100416 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:54.849750+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126754816 unmapped: 39100416 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:55.849957+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126754816 unmapped: 39100416 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:56.850190+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126754816 unmapped: 39100416 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:57.850389+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126754816 unmapped: 39100416 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:58.850600+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126754816 unmapped: 39100416 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:25:59.850914+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126754816 unmapped: 39100416 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:00.851108+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126754816 unmapped: 39100416 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:01.851380+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126754816 unmapped: 39100416 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:02.851571+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 39092224 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:03.851706+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 39092224 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:04.851857+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 39092224 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:05.852059+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 39092224 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:06.852205+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 39092224 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:07.852392+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 39092224 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:08.852593+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 39092224 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:09.852947+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 39092224 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:10.853165+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 39092224 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:11.853381+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 39092224 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:12.853569+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 39092224 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:13.853793+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 39092224 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:14.854006+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 39092224 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:15.854176+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 39092224 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:16.854488+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 39092224 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:17.854719+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126763008 unmapped: 39092224 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:18.854943+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126771200 unmapped: 39084032 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:19.855165+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126771200 unmapped: 39084032 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:20.855397+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126771200 unmapped: 39084032 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:21.855608+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126771200 unmapped: 39084032 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:22.855867+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126771200 unmapped: 39084032 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:23.856030+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126771200 unmapped: 39084032 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:24.856247+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126771200 unmapped: 39084032 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:25.856447+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126771200 unmapped: 39084032 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:26.856725+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126771200 unmapped: 39084032 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:27.856999+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126771200 unmapped: 39084032 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:28.857256+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126771200 unmapped: 39084032 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:29.857541+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126779392 unmapped: 39075840 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:30.857672+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126779392 unmapped: 39075840 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:31.857895+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126779392 unmapped: 39075840 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:32.858122+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126779392 unmapped: 39075840 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:33.858370+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126779392 unmapped: 39075840 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:34.858720+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126779392 unmapped: 39075840 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:35.858934+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126779392 unmapped: 39075840 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:36.859169+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126779392 unmapped: 39075840 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:37.859421+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126779392 unmapped: 39075840 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:38.859663+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126779392 unmapped: 39075840 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:39.859999+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126779392 unmapped: 39075840 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:40.860254+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126779392 unmapped: 39075840 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:41.860543+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126779392 unmapped: 39075840 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:42.860844+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126787584 unmapped: 39067648 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:43.861047+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126787584 unmapped: 39067648 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:44.861237+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126787584 unmapped: 39067648 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:45.863724+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126787584 unmapped: 39067648 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:46.864137+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126787584 unmapped: 39067648 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:47.864305+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126787584 unmapped: 39067648 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:48.864559+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126787584 unmapped: 39067648 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:49.864787+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126787584 unmapped: 39067648 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:50.864959+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126795776 unmapped: 39059456 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:51.865188+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126795776 unmapped: 39059456 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:52.865396+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126795776 unmapped: 39059456 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:53.865555+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126795776 unmapped: 39059456 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:54.865690+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126795776 unmapped: 39059456 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:55.865847+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126795776 unmapped: 39059456 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:56.866033+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126795776 unmapped: 39059456 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:57.866162+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126795776 unmapped: 39059456 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:58.866347+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126795776 unmapped: 39059456 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:26:59.866495+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126795776 unmapped: 39059456 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:00.866716+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126795776 unmapped: 39059456 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:01.866882+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126795776 unmapped: 39059456 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:02.867113+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126795776 unmapped: 39059456 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:03.867287+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126795776 unmapped: 39059456 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:04.867402+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126795776 unmapped: 39059456 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:05.867586+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126795776 unmapped: 39059456 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:06.867799+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126803968 unmapped: 39051264 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:07.868017+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126803968 unmapped: 39051264 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:08.868239+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126803968 unmapped: 39051264 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:09.868492+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126803968 unmapped: 39051264 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:10.868665+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126803968 unmapped: 39051264 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:11.868849+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126803968 unmapped: 39051264 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:12.869038+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126803968 unmapped: 39051264 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:13.869267+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126803968 unmapped: 39051264 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:14.869448+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126803968 unmapped: 39051264 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:15.869707+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126803968 unmapped: 39051264 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:16.869867+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126803968 unmapped: 39051264 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:17.869996+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126803968 unmapped: 39051264 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:18.870156+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126803968 unmapped: 39051264 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:19.870346+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126803968 unmapped: 39051264 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:20.870490+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126803968 unmapped: 39051264 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:21.870673+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126803968 unmapped: 39051264 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:22.870876+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126812160 unmapped: 39043072 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:23.871094+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126812160 unmapped: 39043072 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:24.871283+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126812160 unmapped: 39043072 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:25.871479+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126812160 unmapped: 39043072 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:26.871644+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126812160 unmapped: 39043072 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:27.871815+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126812160 unmapped: 39043072 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:28.871972+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126812160 unmapped: 39043072 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:29.872199+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126812160 unmapped: 39043072 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:30.872436+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126820352 unmapped: 39034880 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:31.872606+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126820352 unmapped: 39034880 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:32.872757+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126820352 unmapped: 39034880 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:33.872894+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126820352 unmapped: 39034880 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:34.873059+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126820352 unmapped: 39034880 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:35.873253+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126820352 unmapped: 39034880 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:36.873445+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126820352 unmapped: 39034880 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:37.873691+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126820352 unmapped: 39034880 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:38.873859+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126820352 unmapped: 39034880 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:39.874053+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126820352 unmapped: 39034880 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:40.874256+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126820352 unmapped: 39034880 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:41.874402+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126820352 unmapped: 39034880 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:42.874557+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126820352 unmapped: 39034880 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:43.874765+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126820352 unmapped: 39034880 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:44.874949+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126820352 unmapped: 39034880 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:45.875169+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126820352 unmapped: 39034880 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:46.875385+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126828544 unmapped: 39026688 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:47.875548+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126828544 unmapped: 39026688 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:48.875720+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126828544 unmapped: 39026688 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:49.875897+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126828544 unmapped: 39026688 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:50.876101+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126828544 unmapped: 39026688 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:51.876296+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126828544 unmapped: 39026688 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:52.876517+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126828544 unmapped: 39026688 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:53.876723+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126828544 unmapped: 39026688 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:54.876916+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126828544 unmapped: 39026688 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:55.877038+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126828544 unmapped: 39026688 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:56.877190+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126828544 unmapped: 39026688 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:57.877360+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126828544 unmapped: 39026688 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:58.877568+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126828544 unmapped: 39026688 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:27:59.877925+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126828544 unmapped: 39026688 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:28:00.878113+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126836736 unmapped: 39018496 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:28:01.878353+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126836736 unmapped: 39018496 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:28:02.878557+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126836736 unmapped: 39018496 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:28:03.878799+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126836736 unmapped: 39018496 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:28:04.878946+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126836736 unmapped: 39018496 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:28:05.879074+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126836736 unmapped: 39018496 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:28:06.879192+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126836736 unmapped: 39018496 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:28:07.879305+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126836736 unmapped: 39018496 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:28:08.879529+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126836736 unmapped: 39018496 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:28:09.879777+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126836736 unmapped: 39018496 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:28:10.879928+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126836736 unmapped: 39018496 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:28:11.880047+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126836736 unmapped: 39018496 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:28:12.880152+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126836736 unmapped: 39018496 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:28:13.880272+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126836736 unmapped: 39018496 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:28:14.880378+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126861312 unmapped: 38993920 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: do_command 'config diff' '{prefix=config diff}'
Dec 07 10:28:48 compute-1 ceph-osd[77581]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 07 10:28:48 compute-1 ceph-osd[77581]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 07 10:28:48 compute-1 ceph-osd[77581]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239510 data_alloc: 218103808 data_used: 7614464
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:28:15.880532+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: do_command 'config show' '{prefix=config show}'
Dec 07 10:28:48 compute-1 ceph-osd[77581]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 07 10:28:48 compute-1 ceph-osd[77581]: do_command 'counter dump' '{prefix=counter dump}'
Dec 07 10:28:48 compute-1 ceph-osd[77581]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 07 10:28:48 compute-1 ceph-osd[77581]: do_command 'counter schema' '{prefix=counter schema}'
Dec 07 10:28:48 compute-1 ceph-osd[77581]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 126697472 unmapped: 39157760 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:28:16.880724+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: osd.1 156 heartbeat osd_stat(store_statfs(0x4fb41e000/0x0/0x4ffc00000, data 0x98576c/0xa4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x3d8f9c5), peers [0,2] op hist [])
Dec 07 10:28:48 compute-1 ceph-osd[77581]: prioritycache tune_memory target: 4294967296 mapped: 127041536 unmapped: 38813696 heap: 165855232 old mem: 2845415833 new mem: 2845415833
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: tick
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_tickets
Dec 07 10:28:48 compute-1 ceph-osd[77581]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-07T10:28:17.880840+0000)
Dec 07 10:28:48 compute-1 ceph-osd[77581]: do_command 'log dump' '{prefix=log dump}'
Dec 07 10:28:48 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 07 10:28:48 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4154921853' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 07 10:28:48 compute-1 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 07 10:28:48 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Dec 07 10:28:48 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2066771847' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 07 10:28:48 compute-1 ceph-mon[80077]: from='client.27554 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:28:48 compute-1 ceph-mon[80077]: from='client.18744 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:28:48 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/3801422216' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 07 10:28:48 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/1972941202' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 07 10:28:48 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/2405987548' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 07 10:28:48 compute-1 ceph-mon[80077]: from='client.27572 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:28:48 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/4067278838' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec 07 10:28:48 compute-1 ceph-mon[80077]: from='client.18762 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:28:48 compute-1 ceph-mon[80077]: from='client.27709 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:28:48 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/2021840992' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 07 10:28:48 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/4154921853' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 07 10:28:48 compute-1 ceph-mon[80077]: from='client.27596 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:28:48 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2709586431' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 07 10:28:48 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/2066771847' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 07 10:28:49 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:28:49 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 07 10:28:49 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/649111588' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 07 10:28:49 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:49 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:28:49 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:28:49.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:28:49 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec 07 10:28:49 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3573555626' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 07 10:28:49 compute-1 ceph-mon[80077]: from='client.27730 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:28:49 compute-1 ceph-mon[80077]: from='client.18777 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:28:49 compute-1 ceph-mon[80077]: from='client.27614 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:28:49 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/2329416046' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 07 10:28:49 compute-1 ceph-mon[80077]: from='client.18798 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:28:49 compute-1 ceph-mon[80077]: from='client.27742 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:28:49 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/3008929057' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 07 10:28:49 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/649111588' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 07 10:28:49 compute-1 ceph-mon[80077]: from='client.18816 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:28:49 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/693975908' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec 07 10:28:49 compute-1 ceph-mon[80077]: from='client.18819 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:28:49 compute-1 ceph-mon[80077]: from='client.27757 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:28:49 compute-1 ceph-mon[80077]: pgmap v1405: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Dec 07 10:28:49 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/3760093307' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 07 10:28:49 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/3573555626' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 07 10:28:49 compute-1 crontab[253536]: (root) LIST (root)
Dec 07 10:28:50 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Dec 07 10:28:50 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4239211672' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec 07 10:28:50 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:50 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:28:50 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:50.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:28:50 compute-1 podman[253641]: 2025-12-07 10:28:50.635409513 +0000 UTC m=+0.130655217 container health_status 8821009fc09e1b0f47c8f713b19b3050cc76859f38c6319b8e88434de76b26d8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 07 10:28:50 compute-1 ceph-mon[80077]: from='client.27653 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:28:50 compute-1 ceph-mon[80077]: from='client.18837 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:28:50 compute-1 ceph-mon[80077]: from='client.27769 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:28:50 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2292681228' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 07 10:28:50 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/4239211672' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec 07 10:28:50 compute-1 ceph-mon[80077]: from='client.27674 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:28:50 compute-1 ceph-mon[80077]: from='client.18852 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:28:50 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/2718751001' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec 07 10:28:50 compute-1 ceph-mon[80077]: from='client.27781 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:28:50 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/3522338217' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 07 10:28:50 compute-1 ceph-mon[80077]: from='client.27698 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:28:50 compute-1 ceph-mon[80077]: from='client.18876 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:28:50 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/541176876' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec 07 10:28:51 compute-1 sudo[253717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 07 10:28:51 compute-1 sudo[253717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:28:51 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Dec 07 10:28:51 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1736247674' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec 07 10:28:51 compute-1 sudo[253717]: pam_unix(sudo:session): session closed for user root
Dec 07 10:28:51 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Dec 07 10:28:51 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/599791771' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec 07 10:28:51 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:51 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:28:51 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:28:51.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:28:51 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Dec 07 10:28:51 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3430379821' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec 07 10:28:51 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Dec 07 10:28:51 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/866675339' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec 07 10:28:51 compute-1 ceph-mon[80077]: from='client.27793 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:28:51 compute-1 ceph-mon[80077]: from='client.27710 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:28:51 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2167255867' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec 07 10:28:51 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/1736247674' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec 07 10:28:51 compute-1 ceph-mon[80077]: from='client.18894 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:28:51 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/746512737' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec 07 10:28:51 compute-1 ceph-mon[80077]: from='client.27805 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:28:51 compute-1 ceph-mon[80077]: from='client.27725 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:28:51 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/599791771' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec 07 10:28:51 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/3211362347' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec 07 10:28:51 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/2445026552' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec 07 10:28:51 compute-1 ceph-mon[80077]: pgmap v1406: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Dec 07 10:28:51 compute-1 ceph-mon[80077]: from='client.27817 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:28:51 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/3430379821' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec 07 10:28:51 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/866675339' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec 07 10:28:52 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:52 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:28:52 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:52.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:28:52 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Dec 07 10:28:52 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3645052527' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec 07 10:28:52 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Dec 07 10:28:52 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1206237869' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec 07 10:28:52 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Dec 07 10:28:52 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3194856777' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Dec 07 10:28:52 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Dec 07 10:28:52 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3940187881' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Dec 07 10:28:53 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/656323754' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Dec 07 10:28:53 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/1820247150' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec 07 10:28:53 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/3128542265' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec 07 10:28:53 compute-1 ceph-mon[80077]: from='client.27832 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:28:53 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/3645052527' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec 07 10:28:53 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/109711398' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Dec 07 10:28:53 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/1206237869' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec 07 10:28:53 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/3864235059' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec 07 10:28:53 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1311189908' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Dec 07 10:28:53 compute-1 ceph-mon[80077]: from='client.27844 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:28:53 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/3194856777' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Dec 07 10:28:53 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/2026681969' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Dec 07 10:28:53 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/3940187881' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Dec 07 10:28:53 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1961768926' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Dec 07 10:28:53 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2978152689' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec 07 10:28:53 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/80663034' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec 07 10:28:53 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Dec 07 10:28:53 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3354064538' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Dec 07 10:28:53 compute-1 systemd[1]: Starting Hostname Service...
Dec 07 10:28:53 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Dec 07 10:28:53 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2935922756' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Dec 07 10:28:53 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Dec 07 10:28:53 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/401986064' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Dec 07 10:28:53 compute-1 systemd[1]: Started Hostname Service.
Dec 07 10:28:53 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:53 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:28:53 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:28:53.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:28:53 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 07 10:28:53 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1956519028' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 07 10:28:53 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Dec 07 10:28:53 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1238877577' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Dec 07 10:28:54 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:28:54 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Dec 07 10:28:54 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3072252731' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Dec 07 10:28:54 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/3354064538' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Dec 07 10:28:54 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1472621041' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 07 10:28:54 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1876030530' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Dec 07 10:28:54 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/2935922756' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Dec 07 10:28:54 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/401986064' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Dec 07 10:28:54 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2716606583' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec 07 10:28:54 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/3432126235' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec 07 10:28:54 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/3430701624' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Dec 07 10:28:54 compute-1 ceph-mon[80077]: pgmap v1407: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Dec 07 10:28:54 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/1956519028' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 07 10:28:54 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/506392686' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Dec 07 10:28:54 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/1238877577' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Dec 07 10:28:54 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/3652436007' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Dec 07 10:28:54 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2844124805' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Dec 07 10:28:54 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:54 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:28:54 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:54.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:28:54 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Dec 07 10:28:54 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3033392073' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Dec 07 10:28:54 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Dec 07 10:28:54 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/814224297' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Dec 07 10:28:55 compute-1 ceph-mon[80077]: from='client.19044 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:28:55 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/3072252731' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Dec 07 10:28:55 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/1861840649' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Dec 07 10:28:55 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/3033392073' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Dec 07 10:28:55 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/3849601122' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Dec 07 10:28:55 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/4137311229' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Dec 07 10:28:55 compute-1 ceph-mon[80077]: from='client.19056 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:28:55 compute-1 ceph-mon[80077]: from='client.27899 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:28:55 compute-1 ceph-mon[80077]: from='client.19065 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:28:55 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2751696477' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Dec 07 10:28:55 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/814224297' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Dec 07 10:28:55 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/1840971482' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Dec 07 10:28:55 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2464388625' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 07 10:28:55 compute-1 sudo[254385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 07 10:28:55 compute-1 sudo[254385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:28:55 compute-1 sudo[254385]: pam_unix(sudo:session): session closed for user root
Dec 07 10:28:55 compute-1 sudo[254438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/75f4c9fd-539a-5e17-b55a-0a12a4e2736c/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 07 10:28:55 compute-1 sudo[254438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:28:55 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:55 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:28:55 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:28:55.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:28:55 compute-1 sudo[254438]: pam_unix(sudo:session): session closed for user root
Dec 07 10:28:55 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Dec 07 10:28:55 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1668684207' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Dec 07 10:28:56 compute-1 ceph-mon[80077]: from='client.27923 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:28:56 compute-1 ceph-mon[80077]: from='client.27929 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:28:56 compute-1 ceph-mon[80077]: from='client.27935 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:28:56 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/455477575' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Dec 07 10:28:56 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/3143688639' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Dec 07 10:28:56 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2336379034' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Dec 07 10:28:56 compute-1 ceph-mon[80077]: from='client.19101 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:28:56 compute-1 ceph-mon[80077]: from='client.27950 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:28:56 compute-1 ceph-mon[80077]: pgmap v1408: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Dec 07 10:28:56 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/1171380163' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Dec 07 10:28:56 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/2405355561' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Dec 07 10:28:56 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/1668684207' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Dec 07 10:28:56 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:28:56 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 07 10:28:56 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:28:56 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:28:56 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 07 10:28:56 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 07 10:28:56 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 07 10:28:56 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:56 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:28:56 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:56.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:28:56 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "versions"} v 0)
Dec 07 10:28:56 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1099839321' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Dec 07 10:28:56 compute-1 podman[254675]: 2025-12-07 10:28:56.5940548 +0000 UTC m=+0.086182757 container health_status 0ebb3721255004696edc9058313c1ab1e77a67317f1a0082a006a6916e80a193 (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 07 10:28:56 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Dec 07 10:28:56 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1033759809' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 07 10:28:57 compute-1 ceph-mon[80077]: from='client.19119 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:28:57 compute-1 ceph-mon[80077]: from='client.19131 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:28:57 compute-1 ceph-mon[80077]: from='client.27964 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:28:57 compute-1 ceph-mon[80077]: pgmap v1409: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s
Dec 07 10:28:57 compute-1 ceph-mon[80077]: from='client.27973 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:28:57 compute-1 ceph-mon[80077]: from='client.27983 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:28:57 compute-1 ceph-mon[80077]: from='client.19146 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:28:57 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/3506297419' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 07 10:28:57 compute-1 ceph-mon[80077]: from='client.27985 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:28:57 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/1099839321' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Dec 07 10:28:57 compute-1 ceph-mon[80077]: from='client.27991 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:28:57 compute-1 ceph-mon[80077]: from='client.27998 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:28:57 compute-1 ceph-mon[80077]: from='client.19167 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:28:57 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1425960593' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Dec 07 10:28:57 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 07 10:28:57 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 07 10:28:57 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/1033759809' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 07 10:28:57 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 07 10:28:57 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 07 10:28:57 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 07 10:28:57 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 07 10:28:57 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Dec 07 10:28:57 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3826927129' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Dec 07 10:28:57 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 07 10:28:57 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 07 10:28:57 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:57 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:28:57 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:28:57.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:28:57 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 07 10:28:57 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 07 10:28:58 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 07 10:28:58 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 07 10:28:58 compute-1 sshd-session[254899]: Invalid user postgres from 104.248.193.130 port 55116
Dec 07 10:28:58 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:58 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:28:58 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:28:58.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:28:58 compute-1 sshd-session[254899]: Connection closed by invalid user postgres 104.248.193.130 port 55116 [preauth]
Dec 07 10:28:58 compute-1 ceph-mon[80077]: from='client.28006 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:28:58 compute-1 ceph-mon[80077]: from='client.28019 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:28:58 compute-1 ceph-mon[80077]: from='client.19185 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:28:58 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/455351589' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Dec 07 10:28:58 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/3826927129' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Dec 07 10:28:58 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 07 10:28:58 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 07 10:28:58 compute-1 ceph-mon[80077]: from='client.28021 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:28:58 compute-1 ceph-mon[80077]: from='client.28043 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:28:58 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 07 10:28:58 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 07 10:28:58 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 07 10:28:58 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 07 10:28:58 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 07 10:28:58 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 07 10:28:58 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 07 10:28:58 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/897309459' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Dec 07 10:28:58 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 07 10:28:58 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 07 10:28:58 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 07 10:28:58 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 07 10:28:58 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 07 10:28:58 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 07 10:28:58 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1102083252' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Dec 07 10:28:58 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 07 10:28:58 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 07 10:28:58 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2502797774' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 07 10:28:58 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 07 10:28:58 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 07 10:28:58 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Dec 07 10:28:58 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4290902187' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Dec 07 10:28:59 compute-1 ceph-mon[80077]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 07 10:28:59 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df"} v 0)
Dec 07 10:28:59 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2201206812' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Dec 07 10:28:59 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 07 10:28:59 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 07 10:28:59 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:28:59 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 07 10:28:59 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:28:59.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 07 10:28:59 compute-1 ceph-mon[80077]: from='client.28051 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:28:59 compute-1 ceph-mon[80077]: pgmap v1410: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s
Dec 07 10:28:59 compute-1 ceph-mon[80077]: from='client.28078 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:28:59 compute-1 ceph-mon[80077]: from='client.19287 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:28:59 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 07 10:28:59 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 07 10:28:59 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/3460184863' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Dec 07 10:28:59 compute-1 ceph-mon[80077]: from='client.28090 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 07 10:28:59 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 07 10:28:59 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 07 10:28:59 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/4290902187' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Dec 07 10:28:59 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/712757737' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Dec 07 10:28:59 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 07 10:28:59 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 07 10:28:59 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 07 10:28:59 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 07 10:28:59 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 07 10:28:59 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 07 10:28:59 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/2201206812' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Dec 07 10:28:59 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 07 10:28:59 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 07 10:28:59 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 07 10:28:59 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 07 10:28:59 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Dec 07 10:28:59 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1096174901' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Dec 07 10:29:00 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df"} v 0)
Dec 07 10:29:00 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3479458442' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Dec 07 10:29:00 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:29:00 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 07 10:29:00 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:29:00.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 07 10:29:00 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs dump"} v 0)
Dec 07 10:29:00 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2803143380' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Dec 07 10:29:00 compute-1 ceph-mon[80077]: from='client.28136 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:29:00 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 07 10:29:00 compute-1 ceph-mon[80077]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 07 10:29:00 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/1096174901' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Dec 07 10:29:00 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/1625633763' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Dec 07 10:29:00 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/197825084' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Dec 07 10:29:00 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/3479458442' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Dec 07 10:29:00 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/2774890625' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Dec 07 10:29:01 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs ls"} v 0)
Dec 07 10:29:01 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3181316889' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Dec 07 10:29:01 compute-1 sudo[255253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 07 10:29:01 compute-1 sudo[255253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 07 10:29:01 compute-1 sudo[255253]: pam_unix(sudo:session): session closed for user root
Dec 07 10:29:01 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:29:01 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:29:01 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.102 - anonymous [07/Dec/2025:10:29:01.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 07 10:29:01 compute-1 ceph-mon[80077]: pgmap v1411: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s
Dec 07 10:29:01 compute-1 ceph-mon[80077]: from='client.28156 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:29:01 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/2803143380' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Dec 07 10:29:01 compute-1 ceph-mon[80077]: from='client.19359 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 07 10:29:01 compute-1 ceph-mon[80077]: from='client.? 192.168.122.101:0/3181316889' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Dec 07 10:29:01 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2199589322' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Dec 07 10:29:01 compute-1 ceph-mon[80077]: from='client.? 192.168.122.100:0/2879349268' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Dec 07 10:29:01 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:29:01 compute-1 ceph-mon[80077]: from='mgr.14721 192.168.122.100:0/1155079041' entity='mgr.compute-0.dotugk' 
Dec 07 10:29:01 compute-1 ceph-mon[80077]: from='client.? 192.168.122.102:0/2718883627' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Dec 07 10:29:01 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump"} v 0)
Dec 07 10:29:01 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/488050472' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Dec 07 10:29:02 compute-1 ceph-mon[80077]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mds stat"} v 0)
Dec 07 10:29:02 compute-1 ceph-mon[80077]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2143585817' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Dec 07 10:29:02 compute-1 radosgw[84964]: ====== starting new request req=0x7fdbe82455d0 =====
Dec 07 10:29:02 compute-1 radosgw[84964]: ====== req done req=0x7fdbe82455d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 07 10:29:02 compute-1 radosgw[84964]: beast: 0x7fdbe82455d0: 192.168.122.100 - anonymous [07/Dec/2025:10:29:02.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s